Dataset columns:

| column | dtype | values / lengths |
|---|---|---|
| status | string | 1 class |
| repo_name | string | 31 values |
| repo_url | string | 31 values |
| issue_id | int64 | 1 to 104k |
| title | string | lengths 4 to 369 |
| body | string | lengths 0 to 254k, nullable (⌀) |
| issue_url | string | lengths 37 to 56 |
| pull_url | string | lengths 37 to 54 |
| before_fix_sha | string | length 40 |
| after_fix_sha | string | length 40 |
| report_datetime | timestamp[us, tz=UTC] | |
| language | string | 5 classes |
| commit_datetime | timestamp[us, tz=UTC] | |
| updated_file | string | lengths 4 to 188 |
| file_content | string | lengths 0 to 5.12M |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,030 |
Docs: Add code-block wrappers to code examples in network_debug_troubleshooting.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_, refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `network_debug_troubleshooting.rst` file in the Network Advanced Topics Guide (`docs/docsite/rst/network/user_guide/`), there are 2 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" .
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep network_debug_troubleshooting.rst
```
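Hypothetical output (the line number shown is illustrative):
```
docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst:168:...you can search for other connection log entries::
```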
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is wrapped in a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79030
|
https://github.com/ansible/ansible/pull/79034
|
ff6e4da36addccb06001f7b05b1a9c04ae1d7984
|
57f22529cbb9bbcb56ae3e8d597fb508dec409a1
| 2022-10-05T10:57:48Z |
python
| 2022-10-05T11:46:08Z |
docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
|
.. _network_debug_troubleshooting:
***************************************
Network Debug and Troubleshooting Guide
***************************************
This section discusses how to debug and troubleshoot network modules in Ansible.
.. contents::
:local:
How to troubleshoot
===================
Ansible network automation errors generally fall into one of the following categories:
:Authentication issues:
* Not correctly specifying credentials
* Remote device (network switch/router) not falling back to other authentication methods
* SSH key issues
:Timeout issues:
* Can occur when trying to pull a large amount of data
* May actually be masking an authentication issue
:Playbook issues:
* Use of ``delegate_to``, instead of ``ProxyCommand``. See :ref:`network proxy guide <network_delegate_to_vs_ProxyCommand>` for more information.
.. warning:: ``unable to open shell``
The ``unable to open shell`` message means that the ``ansible-connection`` daemon has not been able to successfully
talk to the remote network device. This generally means that there is an authentication issue. See the "Authentication and connection issues" section
in this document for more information.
.. _enable_network_logging:
Enabling Networking logging and how to read the logfile
-------------------------------------------------------
**Platforms:** Any
Ansible includes logging to help diagnose and troubleshoot issues regarding Ansible Networking modules.
Because logging is very verbose, it is disabled by default. It can be enabled with the :envvar:`ANSIBLE_LOG_PATH` and :envvar:`ANSIBLE_DEBUG` options on the ansible-controller, that is, the machine running ``ansible-playbook``.
Before running ``ansible-playbook``, run the following commands to enable logging:
.. code-block:: shell
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with 4*v for connection level verbosity
ansible-playbook -vvvv ...
After Ansible has finished running you can inspect the log file which has been created on the ansible-controller:
.. code-block:: console
less $ANSIBLE_LOG_PATH
2017-03-30 13:19:52,740 p=28990 u=fred | creating new control socket for host veos01:22 as user admin
2017-03-30 13:19:52,741 p=28990 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-03-30 13:19:52,741 p=28990 u=fred | current working directory is /home/fred/ansible/test/integration
2017-03-30 13:19:52,741 p=28990 u=fred | using connection plugin network_cli
...
2017-03-30 13:20:14,771 paramiko.transport userauth is OK
2017-03-30 13:20:15,283 paramiko.transport Authentication (keyboard-interactive) successful!
2017-03-30 13:20:15,302 p=28990 u=fred | ssh connection done, setting terminal
2017-03-30 13:20:15,321 p=28990 u=fred | ssh connection has completed successfully
2017-03-30 13:20:15,322 p=28990 u=fred | connection established to veos01 in 0:00:22.580626
From the log, notice:
* ``p=28990`` Is the PID (Process ID) of the ``ansible-connection`` process
* ``u=fred`` Is the user `running` ansible, not the remote-user you are attempting to connect as
* ``creating new control socket for host veos01:22 as user admin`` host:port as user
* ``control socket path is`` location on disk where the persistent connection socket is created
* ``using connection plugin network_cli`` Informs you that persistent connection is being used
* ``connection established to veos01 in 0:00:22.580626`` Time taken to obtain a shell on the remote device
.. note:: Port None ``creating new control socket for host veos01:None``
If the log reports the port as ``None`` this means that the default port is being used.
A future Ansible release will improve this message so that the port is always logged.
Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the ``pid`` from the ``creating new control socket for host`` line you can search for other connection log entries:
.. code-block:: shell
grep "p=28990" $ANSIBLE_LOG_PATH
Enabling Networking device interaction logging
----------------------------------------------
**Platforms:** Any
Ansible includes logging of device interaction in the log file to help diagnose and troubleshoot
issues regarding Ansible Networking modules. The messages are logged in the file pointed to by the ``log_path`` configuration
option in the Ansible configuration file or by setting the :envvar:`ANSIBLE_LOG_PATH`.
.. warning::
The device interaction messages consist of the commands executed on the target device and the returned responses. Since this
log data can contain sensitive information, including passwords in plain text, it is disabled by default.
Additionally, in order to prevent accidental leakage of data, a warning will be shown on every task with this
setting enabled, specifying which host has it enabled and where the data is being logged.
Be sure to fully understand the security implications of enabling this option. Device interaction logging can be enabled globally in the configuration file, by setting an environment variable, or on a per-task basis by passing a special variable to the task.
Before running ``ansible-playbook`` run the following commands to enable logging:
.. code-block:: text
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
Enable device interaction logging for a given task:
.. code-block:: yaml
- name: get version information
  cisco.ios.ios_command:
    commands:
      - show version
  vars:
    ansible_persistent_log_messages: True
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
log_messages = True
or enable the environment variable ``ANSIBLE_PERSISTENT_LOG_MESSAGES``:
.. code-block:: text
# Enable device interaction logging
export ANSIBLE_PERSISTENT_LOG_MESSAGES=True
If the task is failing on connection initialization itself, you should enable this option
globally. If an individual task is failing intermittently, this option can be enabled for that task itself to find the root cause.
After Ansible has finished running, you can inspect the log file which has been created on the ansible-controller.
.. note:: Be sure to fully understand the security implications of enabling this option, as it can log sensitive
information in the log file, thus creating a security vulnerability.
Isolating an error
------------------
**Platforms:** Any
As with any troubleshooting effort, it's important to simplify the test case as much as possible.
For Ansible this can be done by ensuring you are only running against one remote device:
* Using ``ansible-playbook --limit switch1.example.net...``
* Using an ad hoc ``ansible`` command
`ad hoc` refers to running Ansible to perform some quick command using ``/usr/bin/ansible``, rather than the orchestration language, which is ``/usr/bin/ansible-playbook``. In this case we can ensure connectivity by attempting to execute a single command on the remote device:
.. code-block:: shell
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
In the above example, we:
* connect to ``switch1.example.net`` specified in the inventory file ``inventory``
* use the module ``arista.eos.eos_command``
* run the command ``?``
* connect using the username ``admin``
* inform the ``ansible`` command to prompt for the SSH password by specifying ``-k``
If you have SSH keys configured correctly, you don't need to specify the ``-k`` parameter.
If the connection still fails, you can combine it with the enable_network_logging parameter. For example:
.. code-block:: text
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with ``-vvvv`` for connection level verbosity
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
Then review the log file and find the relevant error message in the rest of this document.
.. For details on other ways to authenticate, see LINKTOAUTHHOWTODOCS.
.. _socket_path_issue:
Troubleshooting socket path issues
==================================
**Platforms:** Any
The ``Socket path does not exist or cannot be found`` and ``Unable to connect to socket`` messages indicate that the socket used to communicate with the remote network device is unavailable or does not exist.
For example:
.. code-block:: none
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 115, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Socket path XX does not exist or cannot be found. See Troubleshooting socket path issues in the Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
or
.. code-block:: none
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 123, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Unable to connect to socket XX. See Troubleshooting socket path issues in Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
Suggestions to resolve:
#. Verify that you have write access to the socket path described in the error message (see the check after this list).
#. Follow the steps detailed in :ref:`enable network logging <enable_network_logging>`.
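For the first suggestion, a quick way to inspect the socket directory (the default location is shown; your error message may name a different path):
.. code-block:: shell
# List persistent connection sockets and their permissions
ls -l ~/.ansible/pc/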
If the identified error message from the log file is:
.. code-block:: text
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
or
.. code-block:: text
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
Follow the steps detailed in :ref:`timeout issues <timeout_issues>`.
.. _unable_to_open_shell:
Category "Unable to open shell"
===============================
**Platforms:** Any
The ``unable to open shell`` message means that the ``ansible-connection`` daemon has not been able to successfully talk to the remote network device. This generally means that there is an authentication issue. It is a "catch all" message, meaning you need to enable :ref:`logging <a_note_about_logging>` to find the underlying issues.
For example:
.. code-block:: none
TASK [prepare_eos_tests : enable cli on remote device] **************************************************
fatal: [veos01]: FAILED! => {"changed": false, "failed": true, "msg": "unable to open shell"}
or:
.. code-block:: none
TASK [ios_system : configure name_servers] *************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell",
}
Suggestions to resolve:
Follow the steps detailed in enable_network_logging_.
Once you've identified the error message from the log file, the specific solution can be found in the rest of this document.
Error: "[Errno -2] Name or service not known"
---------------------------------------------
**Platforms:** Any
Indicates that the remote host you are trying to connect to cannot be reached.
For example:
.. code-block:: text
2017-04-04 11:39:48,147 p=15299 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 11:39:48,147 p=15299 u=fred | current working directory is /home/fred/git/ansible-inc/stable-2.3/test/integration
2017-04-04 11:39:48,147 p=15299 u=fred | using connection plugin network_cli
2017-04-04 11:39:48,340 p=15299 u=fred | connecting to host veos01 returned an error
2017-04-04 11:39:48,340 p=15299 u=fred | [Errno -2] Name or service not known
Suggestions to resolve:
* If you are using the ``provider:`` options, ensure that the suboption ``host:`` is set correctly.
* If you are not using ``provider:`` or top-level arguments, ensure your inventory file is correct.
Error: "Authentication failed"
------------------------------
**Platforms:** Any
Occurs if the credentials (username, passwords, or ssh keys) passed to ``ansible-connection`` (through ``ansible`` or ``ansible-playbook``) cannot be used to connect to the remote device.
For example:
.. code-block:: text
<ios01> ESTABLISH CONNECTION FOR USER: cisco on PORT 22 TO ios01
<ios01> Authentication failed.
Suggestions to resolve:
If you are specifying credentials through ``password:`` (either directly or through ``provider:``) or the environment variable ``ANSIBLE_NET_PASSWORD``, it is possible that ``paramiko`` (the Python SSH library that Ansible uses) is using ssh keys, and therefore the credentials you are specifying are being ignored. To find out if this is the case, disable "look for keys". This can be done like this:
.. code-block:: shell
export ANSIBLE_PARAMIKO_LOOK_FOR_KEYS=False
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[paramiko_connection]
look_for_keys = False
Error: "connecting to host <hostname> returned an error" or "Bad address"
-------------------------------------------------------------------------
This may occur if the SSH fingerprint hasn't been added to the known hosts file of Paramiko (the Python SSH library).
When using persistent connections with Paramiko, the connection runs in a background process. If the host doesn't already have a valid SSH key, by default Ansible will prompt to add the host key. This will cause connections running in background processes to fail.
For example:
.. code-block:: text
2017-04-04 12:06:03,486 p=17981 u=fred | using connection plugin network_cli
2017-04-04 12:06:04,680 p=17981 u=fred | connecting to host veos01 returned an error
2017-04-04 12:06:04,682 p=17981 u=fred | (14, 'Bad address')
2017-04-04 12:06:33,519 p=17981 u=fred | number of connection attempts exceeded, unable to connect to control socket
2017-04-04 12:06:33,520 p=17981 u=fred | persistent_connect_interval=1, persistent_connect_retries=30
Suggestions to resolve:
Use ``ssh-keyscan`` to pre-populate the known_hosts file. You need to ensure the keys are correct.
.. code-block:: shell
ssh-keyscan veos01
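If the returned key looks correct, you can append it to your known hosts file; a minimal sketch:
.. code-block:: shell
# Append the scanned host key so background connections stop prompting
ssh-keyscan veos01 >> ~/.ssh/known_hosts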
or
You can tell Ansible to automatically accept the keys.
Environment variable method:
.. code-block:: shell
export ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD=True
ansible-playbook ...
``ansible.cfg`` method:
.. code-block:: ini
[paramiko_connection]
host_key_auto_add = True
.. warning:: Care should be taken before accepting keys.
Error: "No authentication methods available"
--------------------------------------------
For example:
.. code-block:: text
2017-04-04 12:19:05,670 p=18591 u=fred | creating new control socket for host veos01:None as user admin
2017-04-04 12:19:05,670 p=18591 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 12:19:05,670 p=18591 u=fred | current working directory is /home/fred/git/ansible-inc/ansible-workspace-2/test/integration
2017-04-04 12:19:05,670 p=18591 u=fred | using connection plugin network_cli
2017-04-04 12:19:06,606 p=18591 u=fred | connecting to host veos01 returned an error
2017-04-04 12:19:06,606 p=18591 u=fred | No authentication methods available
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
Suggestions to resolve:
No password or SSH key was supplied.
Clearing Out Persistent Connections
-----------------------------------
**Platforms:** Any
In Ansible 2.3, persistent connection sockets are stored in ``~/.ansible/pc`` for all network devices. When an Ansible playbook runs, the persistent socket connection is displayed when verbose output is specified.
``<switch> socket_path: /home/fred/.ansible/pc/f64ddfa760``
To clear out a persistent connection before it times out (the default timeout is 30 seconds
of inactivity), simply delete the socket file.
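For example, using the socket path from the verbose output above:
.. code-block:: shell
rm /home/fred/.ansible/pc/f64ddfa760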
.. _timeout_issues:
Timeout issues
==============
Persistent connection idle timeout
----------------------------------
By default, ``ANSIBLE_PERSISTENT_CONNECT_TIMEOUT`` is set to 30 (seconds). You may see the following error if this value is too low:
.. code-block:: text
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
Suggestions to resolve:
Increase value of persistent connection idle timeout:
.. code-block:: sh
export ANSIBLE_PERSISTENT_CONNECT_TIMEOUT=60
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
connect_timeout = 60
Command timeout
---------------
By default, ``ANSIBLE_PERSISTENT_COMMAND_TIMEOUT`` is set to 30 (seconds). Prior versions of Ansible had this value set to 10 seconds by default.
You may see the following error if this value is too low:
.. code-block:: text
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
Suggestions to resolve:
* Option 1 (Global command timeout setting):
Increase the value of the command timeout in the configuration file or by setting an environment variable.
.. code-block:: shell
export ANSIBLE_PERSISTENT_COMMAND_TIMEOUT=60
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
command_timeout = 60
* Option 2 (Per task command timeout setting):
Increase the command timeout on a per-task basis. All network modules support a
timeout value that can be set on a per task basis.
The timeout value controls the amount of time in seconds before the
task will fail if the command has not returned.
For the ``local`` connection type:
.. FIXME: Detail error here
Suggestions to resolve:
.. code-block:: yaml
- name: save running-config
  cisco.ios.ios_command:
    commands: copy running-config startup-config
    provider: "{{ cli }}"
    timeout: 30
For connection types that use persistent connections, such as ``ansible.netcommon.network_cli``, set the timeout through a task variable:
.. code-block:: yaml
- name: save running-config
  cisco.ios.ios_command:
    commands: copy running-config startup-config
  vars:
    ansible_command_timeout: 60
Some operations take longer than the default 30 seconds to complete. One good
example is saving the current running config on IOS devices to startup config.
In this case, changing the timeout value from the default 30 seconds to 60
seconds will prevent the task from failing before the command completes
successfully.
Persistent connection retry timeout
-----------------------------------
By default, ``ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT`` is set to 15 (seconds). You may see the following error if this value is too low:
.. code-block:: text
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
Suggestions to resolve:
Increase the value of the persistent connection retry timeout.
Note: This value should be greater than the SSH timeout value (the timeout value under the defaults
section in the configuration file) and less than the value of the persistent
connection idle timeout (connect_timeout).
.. code-block:: shell
export ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT=30
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
connect_retry_timeout = 30
Timeout issue due to platform specific login menu with ``network_cli`` connection type
--------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``network_cli`` connection plugin includes configuration options
to handle platform-specific login menus. These options can be set as group/host or task
variables.
Example: Handle a single login menu prompt with host variables
.. code-block:: console
$ cat host_vars/<hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Connect to a host"
ansible_terminal_initial_answer:
- "3"
Example: Handle multiple login menu prompts on the remote host with host variables
.. code-block:: console
$ cat host_vars/<inventory-hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Press any key to enter main menu"
- "Connect to a host"
ansible_terminal_initial_answer:
- "\\r"
- "3"
ansible_terminal_initial_prompt_checkall: True
To handle multiple login menu prompts:
* The values of ``ansible_terminal_initial_prompt`` and ``ansible_terminal_initial_answer`` should be a list.
* The prompt sequence should match the answer sequence.
* The value of ``ansible_terminal_initial_prompt_checkall`` should be set to ``True``.
.. note:: If all the prompts in the sequence are not received from the remote host during connection initialization, the connection will time out.
Playbook issues
===============
This section details issues caused by the playbook itself.
Error: "Unable to enter configuration mode"
-------------------------------------------
**Platforms:** Arista EOS and Cisco IOS
This occurs when you attempt to run a task that requires privileged mode in a user mode shell.
For example:
.. code-block:: console
TASK [ios_system : configure name_servers] *****************************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to enter configuration mode",
}
Suggestions to resolve:
Use ``connection: ansible.netcommon.network_cli`` and ``become: yes``, as shown in the sketch below.
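A minimal play-level sketch (the host group and configuration task are illustrative):
.. code-block:: yaml
- hosts: ios_devices
  connection: ansible.netcommon.network_cli
  become: yes
  become_method: enable
  tasks:
    - name: This task now runs in privileged (enable) mode
      cisco.ios.ios_config:
        lines:
          - ip name-server 8.8.8.8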
Proxy Issues
============
.. _network_delegate_to_vs_ProxyCommand:
delegate_to vs ProxyCommand
---------------------------
In order to use a bastion or intermediate jump host to connect to network devices over ``cli``
transport, network modules support the use of ``ProxyCommand``.
To use ``ProxyCommand``, configure the proxy settings in the Ansible inventory
file to specify the proxy host.
.. code-block:: ini
[nxos]
nxos01
nxos02
[nxos:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
With the configuration above, simply build and run the playbook as normal with
no additional changes necessary. The network module will now connect to the
network device by first connecting to the host specified in
``ansible_ssh_common_args``, which is ``bastion01`` in the above example.
You can also set the proxy target for all hosts by using environment variables.
.. code-block:: sh
export ANSIBLE_SSH_ARGS='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
Using bastion/jump host with netconf connection
-----------------------------------------------
Enabling jump host setting
--------------------------
A bastion/jump host with a netconf connection can be enabled by:
- Setting the Ansible variable ``ansible_netconf_ssh_config`` either to ``True`` or to a custom ssh config file path
- Setting the environment variable ``ANSIBLE_NETCONF_SSH_CONFIG`` to ``True`` or to a custom ssh config file path
- Setting ``ssh_config = 1`` or ``ssh_config = <ssh-file-path>`` under the ``netconf_connection`` section
If the configuration variable is set to ``1``, the ProxyCommand and other ssh variables are read from
the default ssh config file (~/.ssh/config).
If the configuration variable is set to a file path, the ProxyCommand and other ssh variables are read
from the given custom ssh config file.
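For example, a hypothetical ``host_vars`` entry that points at a custom ssh config file:
.. code-block:: yaml
ansible_connection: ansible.netcommon.netconf
ansible_netconf_ssh_config: /home/fred/.ssh/netconf_config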
Example ssh config file (~/.ssh/config)
---------------------------------------
.. code-block:: ini
Host jumphost
HostName jumphost.domain.name.com
User jumphost-user
IdentityFile "/path/to/ssh-key.pem"
Port 22
# Note: Due to the way that Paramiko reads the SSH Config file,
# you need to specify the NETCONF port that the host uses.
# In other words, it does not automatically use ansible_port
# As a result you need either:
Host junos01
HostName junos01
ProxyCommand ssh -W %h:22 jumphost
# OR
Host junos01
HostName junos01
ProxyCommand ssh -W %h:830 jumphost
# Depending on the netconf port used.
Example Ansible inventory file:
.. code-block:: ini
[junos]
junos01
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault...
.. note:: Using ``ProxyCommand`` with passwords through variables
By design, SSH doesn't support providing passwords through environment variables.
This is done to prevent secrets from leaking out, for example in ``ps`` output.
We recommend using SSH keys, and if needed an ssh-agent, rather than passwords, wherever possible.
Miscellaneous Issues
====================
Intermittent failure while using ``ansible.netcommon.network_cli`` connection type
------------------------------------------------------------------------------------
If the command prompt received in response is not matched correctly within
the ``ansible.netcommon.network_cli`` connection plugin, the task might fail intermittently with a truncated
response or with the error message ``operation requires privilege escalation``.
Starting in 2.7.1, a new buffer read timer was added to ensure prompts are matched properly
and a complete response is sent in the output. The timer default value is 0.2 seconds and
can be adjusted on a per-task basis or set globally, in seconds.
Example: Per-task timer setting
.. code-block:: yaml
- name: gather ios facts
  cisco.ios.ios_facts:
    gather_subset: all
  register: result
  vars:
    ansible_buffer_read_timeout: 2
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
buffer_read_timeout = 2
This timer delay, applied per command executed on the remote host, can be disabled by setting the value to zero.
Task failure due to mismatched error regex within command response using ``ansible.netcommon.network_cli`` connection type
----------------------------------------------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``ansible.netcommon.network_cli`` connection plugin includes configuration options
for stdout and stderr regexes, used to identify whether a command execution response is a normal response
or an error response. These options can be set as group/host variables or as
task variables.
Example: For mismatched error response
.. code-block:: yaml
- name: fetch logs from remote host
  cisco.ios.ios_command:
    commands:
      - show logging
Playbook run output:
.. code-block:: console
TASK [first fetch logs] ********************************************************
fatal: [ios01]: FAILED! => {
"changed": false,
"msg": "RF Name:\r\n\r\n <--nsip-->
\"IPSEC-3-REPLAY_ERROR: Test log\"\r\n*Aug 1 08:36:18.483: %SYS-7-USERLOG_DEBUG:
Message from tty578(user id: ansible): test\r\nan-ios-02#"}
Suggestions to resolve:
Modify the error regex for the individual task.
.. code-block:: yaml
- name: fetch logs from remote host
  cisco.ios.ios_command:
    commands:
      - show logging
  vars:
    ansible_terminal_stderr_re:
      - pattern: 'connection timed out'
        flags: 're.I'
The terminal plugin regex options ``ansible_terminal_stderr_re`` and ``ansible_terminal_stdout_re`` have
``pattern`` and ``flags`` as keys. The value of the ``flags`` key should be a value that is accepted by
the ``re.compile`` Python method.
Intermittent failure while using ``ansible.netcommon.network_cli`` connection type due to slower network or remote target host
----------------------------------------------------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``ansible.netcommon.network_cli`` connection plugin includes a configuration option to control
the number of attempts to connect to a remote host. The default number of attempts is three.
After every retry attempt, the delay between retries is increased by a power of 2 in seconds, until either the
maximum number of attempts is exhausted or the ``persistent_command_timeout`` or ``persistent_connect_timeout`` timer is triggered.
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
network_cli_retries = 5
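For a single run, the equivalent environment variable can be exported instead; a sketch assuming the ``ANSIBLE_NETWORK_CLI_RETRIES`` variable exposed by the connection plugin:
.. code-block:: shell
export ANSIBLE_NETWORK_CLI_RETRIES=5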
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,028 |
Docs: Add code-block wrappers to code examples in guide_alicloud.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_, refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `guide_alicloud.rst` file in the Scenario Guides (`docs/docsite/rst/scenario_guides`), there are 2 instances where a lead-in sentence ends with `::`. Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" .
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/guide_alicloud.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79028
|
https://github.com/ansible/ansible/pull/79037
|
57f22529cbb9bbcb56ae3e8d597fb508dec409a1
|
63b5fc4b8dfc22bf3fde03fe14c2b09353845e65
| 2022-10-05T10:41:06Z |
python
| 2022-10-05T12:39:06Z |
docs/docsite/rst/scenario_guides/guide_alicloud.rst
|
Alibaba Cloud Compute Services Guide
====================================
.. _alicloud_intro:
Introduction
````````````
Ansible contains several modules for controlling and managing Alibaba Cloud Compute Services (Alicloud). This guide
explains how to use the Alicloud Ansible modules together.
All Alicloud modules require ``footmark`` - install it on your control machine with ``pip install footmark``.
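For example:
.. code-block:: shell
pip install footmark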
Cloud modules, including Alicloud modules, execute on your local machine (the control machine) with ``connection: local``, rather than on remote machines defined in your hosts.
Normally, you'll use the following pattern for plays that provision Alicloud resources:
.. code-block:: yaml
- hosts: localhost
  connection: local
  vars:
    - ...
  tasks:
    - ...
.. _alicloud_authentication:
Authentication
``````````````
You can specify your Alicloud authentication credentials (access key and secret key) by passing them as
environment variables or by storing them in a vars file.
To pass authentication credentials as environment variables:
.. code-block:: shell
export ALICLOUD_ACCESS_KEY='Alicloud123'
export ALICLOUD_SECRET_KEY='AlicloudSecret123'
To store authentication credentials in a vars file, encrypt them with :ref:`Ansible Vault<vault>` to keep them secure, then list them:
.. code-block:: yaml
---
alicloud_access_key: "--REMOVED--"
alicloud_secret_key: "--REMOVED--"
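For example, to encrypt a hypothetical vars file with Ansible Vault:
.. code-block:: shell
# Prompts for a vault password and encrypts the file in place
ansible-vault encrypt group_vars/all/alicloud.yml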
Note that if you store your credentials in a vars file, you need to refer to them in each Alicloud module. For example:
.. code-block:: yaml
- ali_instance:
    alicloud_access_key: "{{alicloud_access_key}}"
    alicloud_secret_key: "{{alicloud_secret_key}}"
    image_id: "..."
.. _alicloud_provisioning:
Provisioning
````````````
Alicloud modules create Alicloud ECS instances, disks, virtual private clouds, virtual switches, security groups and other resources.
You can use the ``count`` parameter to control the number of resources you create or terminate. For example, if you want exactly 5 instances tagged ``NewECS``,
set the ``count`` of instances to 5 and the ``count_tag`` to ``NewECS``, as shown in the last task of the example playbook below.
If there are no instances with the tag ``NewECS``, the task creates 5 new instances. If there are 2 instances with that tag, the task
creates 3 more. If there are 8 instances with that tag, the task terminates 3 of those instances.
If you do not specify a ``count_tag``, the task creates the number of instances you specify in ``count`` with the ``instance_name`` you provide.
.. code-block:: yaml
# alicloud_setup.yml
- hosts: localhost
  connection: local
  tasks:
    - name: Create VPC
      ali_vpc:
        cidr_block: '{{ cidr_block }}'
        vpc_name: new_vpc
      register: created_vpc
    - name: Create VSwitch
      ali_vswitch:
        alicloud_zone: '{{ alicloud_zone }}'
        cidr_block: '{{ vsw_cidr }}'
        vswitch_name: new_vswitch
        vpc_id: '{{ created_vpc.vpc.id }}'
      register: created_vsw
    - name: Create security group
      ali_security_group:
        name: new_group
        vpc_id: '{{ created_vpc.vpc.id }}'
        rules:
          - proto: tcp
            port_range: 22/22
            cidr_ip: 0.0.0.0/0
            priority: 1
        rules_egress:
          - proto: tcp
            port_range: 80/80
            cidr_ip: 192.168.0.54/32
            priority: 1
      register: created_group
    - name: Create a set of instances
      ali_instance:
        security_groups: '{{ created_group.group_id }}'
        instance_type: ecs.n4.small
        image_id: "{{ ami_id }}"
        instance_name: "My-new-instance"
        instance_tags:
          Name: NewECS
          Version: 0.0.1
        count: 5
        count_tag:
          Name: NewECS
        allocate_public_ip: true
        max_bandwidth_out: 50
        vswitch_id: '{{ created_vsw.vswitch.id }}'
      register: create_instance
In the example playbook above, data about the vpc, vswitch, group, and instances created by this playbook
is saved in the variables defined by the ``register`` keyword in each task.
Each Alicloud module offers a variety of parameter options. Not all options are demonstrated in the above example.
See each individual module for further details and examples.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,038 |
Docs: Add code-block wrappers to code examples in general_precedence.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_, refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `general_precedence.rst` file in the `docs/docsite/rst/reference_appendices/` directory, there are 5 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep general_precedence.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/general_precedence.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79038
|
https://github.com/ansible/ansible/pull/79042
|
63b5fc4b8dfc22bf3fde03fe14c2b09353845e65
|
3a788314a2362e548a9bbe47b48456a9a0f1364f
| 2022-10-05T12:16:41Z |
python
| 2022-10-05T15:41:34Z |
docs/docsite/rst/reference_appendices/general_precedence.rst
|
.. _general_precedence_rules:
Controlling how Ansible behaves: precedence rules
=================================================
To give you maximum flexibility in managing your environments, Ansible offers many ways to control how Ansible behaves: how it connects to managed nodes and how it works once it has connected.
If you use Ansible to manage a large number of servers, network devices, and cloud resources, you may define Ansible behavior in several different places and pass that information to Ansible in several different ways.
This flexibility is convenient, but it can backfire if you do not understand the precedence rules.
These precedence rules apply to any setting that can be defined in multiple ways (by configuration settings, command-line options, playbook keywords, variables).
.. contents::
:local:
Precedence categories
---------------------
Ansible offers four sources for controlling its behavior. In order of precedence from lowest (most easily overridden) to highest (overrides all others), the categories are:
* Configuration settings
* Command-line options
* Playbook keywords
* Variables
Each category overrides any information from all lower-precedence categories. For example, a playbook keyword will override any configuration setting.
Within each precedence category, specific rules apply. However, generally speaking, 'last defined' wins and overrides any previous definitions.
Configuration settings
^^^^^^^^^^^^^^^^^^^^^^
:ref:`Configuration settings<ansible_configuration_settings>` include both values from the ``ansible.cfg`` file and environment variables. Within this category, values set in configuration files have lower precedence. Ansible uses the first ``ansible.cfg`` file it finds, ignoring all others. Ansible searches for ``ansible.cfg`` in these locations in order:
* ``ANSIBLE_CONFIG`` (environment variable if set)
* ``ansible.cfg`` (in the current directory)
* ``~/.ansible.cfg`` (in the home directory)
* ``/etc/ansible/ansible.cfg``
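For example, to force a specific configuration file for a single run (paths are illustrative):
.. code-block:: shell
ANSIBLE_CONFIG=/path/to/project/ansible.cfg ansible-playbook site.yml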
Environment variables have a higher precedence than entries in ``ansible.cfg``. If you have environment variables set on your control node, they override the settings in whichever ``ansible.cfg`` file Ansible loads. The value of any given environment variable follows normal shell precedence: the last value defined overwrites previous values.
Command-line options
^^^^^^^^^^^^^^^^^^^^
Any command-line option will override any configuration setting.
When you type something directly at the command line, you may feel that your hand-crafted values should override all others, but Ansible does not work that way. Command-line options have low precedence - they override configuration only. They do not override playbook keywords, variables from inventory or variables from playbooks.
You can override all other settings from all other sources in all other precedence categories at the command line by :ref:`general_precedence_extra_vars`, but that is not a command-line option, it is a way of passing a :ref:`variable<general_precedence_variables>`.
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``:
.. code-block:: shell
ansible -u mike -m ping myhost -u carol
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in the inventory files ``inventory1`` and ``inventory2``:
.. code-block:: shell
ansible -i /path/inventory1 -i /path/inventory2 -m ping all
The help for each :ref:`command-line tool<command_line_tools>` lists available options for that tool.
Playbook keywords
^^^^^^^^^^^^^^^^^
Any :ref:`playbook keyword<playbook_keywords>` will override any command-line option and any configuration setting.
Within playbook keywords, precedence flows with the playbook itself; the more specific wins against the more general:
- play (most general)
- blocks/includes/imports/roles (optional and can contain tasks and each other)
- tasks (most specific)
A simple example:
.. code-block:: yaml
- hosts: all
  connection: ssh
  tasks:
    - name: This task uses ssh.
      ping:
    - name: This task uses paramiko.
      connection: paramiko
      ping:
In this example, the ``connection`` keyword is set to ``ssh`` at the play level. The first task inherits that value, and connects using ``ssh``. The second task inherits that value, overrides it, and connects using ``paramiko``.
The same logic applies to blocks and roles as well. All tasks, blocks, and roles within a play inherit play-level keywords; any task, block, or role can override any keyword by defining a different value for that keyword within the task, block, or role.
Remember that these are KEYWORDS, not variables. Both playbooks and variable files are defined in YAML but they have different significance.
Playbooks are the command or 'state description' structure for Ansible, variables are data we use to help make playbooks more dynamic.
.. _general_precedence_variables:
Variables
^^^^^^^^^
Any variable will override any playbook keyword, any command-line option, and any configuration setting.
Variables that have equivalent playbook keywords, command-line options, and configuration settings are known as :ref:`connection_variables`. Originally designed for connection parameters, this category has expanded to include other core variables like the temporary directory and the python interpreter.
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in :ref:`inventory<intro_inventory>`. You can define variables for tasks and plays in ``vars:`` blocks in :ref:`playbooks<about_playbooks>`. However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of :ref:`variable precedence <ansible_variable_precedence>` as any other variables.
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role:
.. code-block:: yaml
- hosts: cloud
  gather_facts: false
  become: true
  vars:
    ansible_become_user: admin
  tasks:
    - name: This task uses admin as the become user.
      dnf:
        name: some-service
        state: latest
    - block:
        - name: This task uses service-admin as the become user.
          # a task to configure the new service
        - name: This task also uses service-admin as the become user, defined in the block.
          # second task to configure the service
      vars:
        ansible_become_user: service-admin
    - name: This task (outside of the block) uses admin as the become user again.
      service:
        name: some-service
        state: restarted
Variable scope: how long is a value available?
""""""""""""""""""""""""""""""""""""""""""""""
Variable values set in a playbook exist only within the playbook object that defines them. These 'playbook object scope' variables are not available to subsequent objects, including other plays.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available through the ``hostvars[]`` dictionary.
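A minimal sketch of host scope (the hostnames and fact name are hypothetical):
.. code-block:: yaml
- hosts: app01
  gather_facts: false
  tasks:
    - name: Set a host-scope variable on app01
      set_fact:
        app_port: 8080
- hosts: web
  gather_facts: false
  tasks:
    - name: A later play can read the value through hostvars
      debug:
        msg: "{{ hostvars['app01']['app_port'] }}"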
.. _general_precedence_extra_vars:
Using ``-e`` extra variables at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian``, not as ``carol``:
.. code-block:: shell
ansible -u carol -e 'ansible_user=brian' -a whoami all
You must specify both the variable name and the value with ``--extra-vars``.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,036 |
Docs: Add code-block wrappers to code examples in YAMLSyntax.rst
|
### Note: This issue has been assigned to shade34321.
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_, refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `YAMLSyntax.rst` file in the `docs/docsite/rst/reference_appendices/` directory, there are 19 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep YAMLSyntax.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/YAMLSyntax.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79036
|
https://github.com/ansible/ansible/pull/79040
|
3fc337146383451d9348f00bf8abbb0dd1aa2dab
|
25a770de37f6ad4099dff490a44b6ec70db3d4b4
| 2022-10-05T11:47:04Z |
python
| 2022-10-05T16:28:53Z |
docs/docsite/rst/reference_appendices/YAMLSyntax.rst
|
.. _yaml_syntax:
YAML Syntax
===========
This page provides a basic overview of correct YAML syntax, which is how Ansible
playbooks (our configuration management language) are expressed.
We use YAML because it is easier for humans to read and write than other common
data formats like XML or JSON. Further, there are libraries available in most
programming languages for working with YAML.
You may also wish to read :ref:`working_with_playbooks` at the same time to see how this
is used in practice.
YAML Basics
-----------
For Ansible, nearly every YAML file starts with a list.
Each item in the list is a list of key/value pairs, commonly
called a "hash" or a "dictionary". So, we need to know how
to write lists and dictionaries in YAML.
There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally
begin with ``---`` and end with ``...``. This is part of the YAML format and indicates the start and end of a document.
All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space):
.. code-block:: yaml
---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango
...
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space):
.. code-block:: yaml
# An employee record
martin:
  name: Martin D'vloper
  job: Developer
  skill: Elite
More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both:
.. code-block:: yaml
# Employee records
- martin:
    name: Martin D'vloper
    job: Developer
    skills:
      - python
      - perl
      - pascal
- tabitha:
    name: Tabitha Bitumen
    job: Developer
    skills:
      - lisp
      - fortran
      - erlang
Dictionaries and lists can also be represented in an abbreviated form if you really want to:
.. code-block:: yaml
---
martin: {name: Martin D'vloper, job: Developer, skill: Elite}
fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']
These are called "Flow collections".
.. _truthiness:
Ansible doesn't really use these too much, but you can also specify a :ref:`boolean value <playbooks_variables>` (true/false) in several forms:
.. code-block:: yaml
create_key: true
needs_agent: false
knows_oop: True
likes_emacs: TRUE
uses_cvs: false
Use lowercase 'true' or 'false' for boolean values in dictionaries if you want to be compatible with default yamllint options.
Values can span multiple lines using ``|`` or ``>``. Spanning multiple lines using a "Literal Block Scalar" ``|`` will include the newlines and any trailing spaces.
Using a "Folded Block Scalar" ``>`` will fold newlines to spaces; it's used to make what would otherwise be a very long line easier to read and edit.
In either case the indentation will be ignored.
Examples are:
.. code-block:: yaml
include_newlines: |
  exactly as you see
  will appear these three
  lines of poetry

fold_newlines: >
  this is really a
  single line of text
  despite appearances
While in the above ``>`` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept:
.. code-block:: yaml
fold_some_newlines: >
  a
  b

  c
  d
   e
  f
Alternatively, it can be enforced by including newline ``\n`` characters:
.. code-block:: yaml
fold_same_newlines: "a b\nc d\n e\nf\n"
Let's combine what we learned so far in an arbitrary YAML example.
This really has nothing to do with Ansible, but will give you a feel for the format:
.. code-block:: yaml
---
# An employee record
name: Martin D'vloper
job: Developer
skill: Elite
employed: True
foods:
  - Apple
  - Orange
  - Strawberry
  - Mango
languages:
  perl: Elite
  python: Elite
  pascal: Lame
education: |
  4 GCSEs
  3 A-Levels
  BSc in the Internet of Things
That's all you really need to know about YAML to start writing `Ansible` playbooks.
Gotchas
-------
While you can put just about anything into an unquoted scalar, there are some exceptions.
A colon followed by a space (or newline) ``": "`` is an indicator for a mapping.
A space followed by the pound sign ``" #"`` starts a comment.
Because of this, the following is going to result in a YAML syntax error:
.. code-block:: text
foo: somebody said I should put a colon here: so I did
windows_drive: c:
...but this will work:
.. code-block:: yaml
windows_path: c:\windows
You will want to quote hash values using colons followed by a space or the end of the line:
.. code-block:: yaml
foo: 'somebody said I should put a colon here: so I did'
windows_drive: 'c:'
...and then the colon will be preserved.
Alternatively, you can use double quotes:
.. code-block:: yaml
foo: "somebody said I should put a colon here: so I did"
windows_drive: "c:"
The difference between single quotes and double quotes is that in double quotes
you can use escapes:
.. code-block:: yaml
foo: "a \t TAB and a \n NEWLINE"
The list of allowed escapes can be found in the YAML Specification under "Escape Sequences" (YAML 1.1) or "Escape Characters" (YAML 1.2).
The following is invalid YAML:
.. code-block:: text
foo: "an escaped \' single quote"
Further, Ansible uses "{{ var }}" for variables. If a value after a colon starts
with a "{", YAML will think it is a dictionary, so you must quote it, like so::
foo: "{{ variable }}"
If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things:
.. code-block:: yaml
foo: "{{ variable }}/additional/string/literal"
foo2: "{{ variable }}\\backslashes\\are\\also\\special\\characters"
foo3: "even if it's just a string literal it must all be quoted"
Not valid:
.. code-block:: text
foo: "E:\\path\\"rest\\of\\path
In addition to ``'`` and ``"`` there are a number of characters that are special (or reserved) and cannot be used
as the first character of an unquoted scalar: ``[] {} > | * & ! % # ` @ ,``.
You should also be aware of ``? : -``. In YAML, they are allowed at the beginning of a string if a non-space
character follows, but YAML processor implementations differ, so it's better to use quotes.
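A short sketch of quoting values that would otherwise start with a special character (the keys are arbitrary):
.. code-block:: yaml
# Each of these values would be invalid or misread without the quotes
brace: "{ not a flow mapping }"
bracket: "[ not a flow sequence ]"
asterisk: "*not_an_alias"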
In Flow Collections, the rules are a bit more strict:
.. code-block:: yaml
a scalar in block mapping: this } is [ all , valid
flow mapping: { key: "you { should [ use , quotes here" }
Boolean conversion is helpful, but this can be a problem when you want a literal ``yes`` or other boolean values as a string.
In these cases just use quotes:
.. code-block:: yaml
non_boolean: "yes"
other_string: "False"
YAML converts certain strings into floating-point values, such as the string
`1.0`. If you need to specify a version number (in a requirements.yml file, for
example), you will need to quote the value if it looks like a floating-point
value:
.. code-block:: yaml
version: "1.0"
.. seealso::
:ref:`working_with_playbooks`
Learn what playbooks can do and how to write/run them.
`YAMLLint <http://yamllint.com/>`_
YAML Lint (online) helps you debug YAML syntax if you are having problems
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the github project source
`Wikipedia YAML syntax reference <https://en.wikipedia.org/wiki/YAML>`_
A good guide to YAML syntax
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels (join #yaml for yaml-specific questions)
`YAML 1.1 Specification <https://yaml.org/spec/1.1/>`_
The Specification for YAML 1.1, which PyYAML and libyaml are currently
implementing
`YAML 1.2 Specification <https://yaml.org/spec/1.2/spec.html>`_
For completeness, YAML 1.2 is the successor of 1.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,039 |
Docs: Add code-block wrappers to code examples in faq.rst
|
### This issue has been assigned to Shellylo
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `faq.rst` file in the `docs/reference_appendices/` directory, there are 15 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep faq.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/faq.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79039
|
https://github.com/ansible/ansible/pull/79047
|
25a770de37f6ad4099dff490a44b6ec70db3d4b4
|
35700f57cc62a15d7f04a82eaae2193c65fb8570
| 2022-10-05T12:27:22Z |
python
| 2022-10-05T16:46:14Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _collections_transition:
Where did all the modules go?
+++++++++++++++++++++++++++++
In July, 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10).
* To learn more about using collections, see :ref:`collections`.
* To learn more about developing collections, see :ref:`developing_collections`.
* To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections.
.. _find_my_module:
Where did this specific module go?
++++++++++++++++++++++++++++++++++
If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`.
.. _slow_install:
How can I speed up Ansible on systems with slow disks?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible may feel sluggish on systems with slow disks, such as Raspberry PI. See `Ansible might be running slow if libyaml is not available <https://www.jeffgeerling.com/blog/2021/ansible-might-be-running-slow-if-libyaml-not-available>`_ for hints on how to improve this.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire play?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting environment variables can be done with the `environment` keyword. It can be used at the task or other levels in the play.
.. code-block:: yaml
    shell:
      cmd: date
    environment:
      LANG: fr_FR.UTF-8
.. code-block:: yaml
    - hosts: servers
      environment:
        PATH: "{{ ansible_env.PATH }}:/thingy/bin"
        SOME: value
.. note:: Starting in 2.0.1, the setup task from ``gather_facts`` also inherits the environment directive from the play; you might need to use the ``|default`` filter to avoid errors if setting this at play level.
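For example, a hedged sketch that guards against ``ansible_env`` being absent during fact gathering (the fallback path is illustrative):

.. code-block:: yaml

    - hosts: servers
      environment:
        PATH: "{{ ansible_env.PATH | default('/usr/local/bin:/usr/bin:/bin') }}:/thingy/bin"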
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
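For instance, a hypothetical ``group_vars/webservers.yml`` could hold the settings shared by the whole group:

.. code-block:: yaml

    # group_vars/webservers.yml (illustrative values)
    ansible_user: deploy
    ansible_connection: ssh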
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use
Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, if you are first installing Ansible on these enterprise operating systems, it offers a better experience for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a ``ProxyCommand`` in the
``ansible_ssh_common_args`` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents:

.. code-block:: yaml

    ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group ``gatewayed``. (These arguments are used
in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to
repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.)
Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute ``nc %h:%p`` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``,
or globally by setting ``ssh_args`` in ``ansible.cfg``.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option,
SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval``
into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that
``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
.. _cloud_provider_performance:
How do I speed up Ansible runs against servers from cloud providers (EC2, OpenStack, and so on)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of machines of a cloud provider from your laptop.
Rather, connect to a management node inside the cloud provider's network first and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
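For example, a minimal host variables sketch (the filename and interpreter path are illustrative):

.. code-block:: yaml

    # host_vars/legacy-host.yml (hypothetical)
    ansible_python_interpreter: /usr/local/bin/python3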
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter through
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python through
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``),
this facility will be ignored so you will be at the mercy of the remote `$PATH`.
.. _installation_faqs:
How do I handle the package dependencies required by Ansible's package dependencies during Ansible installation?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

While installing Ansible, you may sometimes encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages, which are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
In order to solve these kinds of dependency issues, you might need to install required packages using
the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available through pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./py3-ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./py3-ansible/lib64/python3.*/site-packages/
Running on macOS
----------------
When executing Ansible on a system with macOS as a controller machine one might encounter the following error:
.. error::
+[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
ERROR! A worker was found in a dead state
In general the recommended workaround is to set the following environment variable in your shell:
.. code-block:: shell
$ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses ( :file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
(see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
  and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set:

  .. code-block:: ini

      remote_tmp=$HOME/.ansible/tmp

  In Ansible 2.5 and later, you can also set it per-host in inventory like this:

  .. code-block:: text

      solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
  this in inventory like so:

  .. code-block:: text

      solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode through sftp; however, execution of python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
  To fix this, set the path to the Python installation in your inventory like so:

  .. code-block:: text

      zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
  On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``:

  .. code-block:: text

      zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
Running under fakeroot
----------------------
Some issues arise as ``fakeroot`` does not create a full nor POSIX compliant system by default.
It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
For example, in the ansible config file (or through an environment variable) you can set:

.. code-block:: ini

    remote_tmp=$HOME/.ansible/tmp
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1``
in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module
as an ad hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe
the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question
if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to use a play that gathers facts first, for facts included in the task above.
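A minimal sketch of such a play (the host pattern is illustrative):

.. code-block:: yaml

    - hosts: all
      gather_facts: true
      tasks:
        - ansible.builtin.debug:
            var: hostvars[inventory_hostname]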
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can just access the "$groups" dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname,
you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:

.. code-block:: yaml

    - hosts: db_servers
      tasks:
        - debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
through a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname``
is a magic variable that indicates the current host you are looping over in the host loop.
In the example above, if your interface names have dashes, you must replace them with underscores:
.. code-block:: jinja
    {{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }}
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk assign variables;
they are not a first-class entity, and Ansible only cares about hosts and tasks.

That said, you could just access the variable by selecting a host that is part of that group. See first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud
based inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 '#set' directive to simplify this, or in a playbook, you could also use set_fact:

.. code-block:: yaml+jinja

    - set_fact: headnode={{ groups['webservers'][0] }}

    - debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The ``copy`` module has a recursive parameter. However, take a look at the ``synchronize`` module if you want to do something more efficient
for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules.
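As a rough sketch of the two approaches (paths are illustrative; ``synchronize`` lives in the ``ansible.posix`` collection):

.. code-block:: yaml

    - ansible.builtin.copy:       # recurses because src is a directory
        src: files/tree/
        dest: /opt/app/

    - ansible.posix.synchronize:  # wraps rsync; usually faster for many files
        src: files/tree/
        dest: /opt/app/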
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
**On the controller machine:** To access existing variables on the controller, use the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine:

.. code-block:: yaml+jinja

    ---
    # ...
    vars:
      local_home: "{{ lookup('env','HOME') }}"
**On target machines:** Environment variables are available through facts in the ``ansible_env`` variable:
.. code-block:: jinja
{{ ansible_env.HOME }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment`
in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the
:ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>`
modules to introduce environment variables into files. The exact files to edit vary depending on your OS
and distribution and local configuration.
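For example, a hedged ``lineinfile`` sketch (the file path and variable name are made up):

.. code-block:: yaml

    - name: persist an environment variable for login shells
      ansible.builtin.lineinfile:
        path: /etc/profile.d/custom.sh
        line: 'export MY_VAR=value'
        create: true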
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible ad hoc command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The ``mkpasswd`` utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (for example, you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
In OpenBSD, a similar option is available in the base system, called ``encrypt(1)``.
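Putting it together, a minimal sketch of feeding a hashed password to the ``user`` module (the username and salt are illustrative; in practice keep the plaintext in Vault):

.. code-block:: yaml

    - name: create a user with a hashed password
      ansible.builtin.user:
        name: testuser
        password: "{{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"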
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Also array notation allows for dynamic variable composition, see dynamic_variables_.
Another problem with 'dot notation' is that some keys collide with attributes and methods of Python dictionaries.
.. code-block:: jinja
item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries
item['update'] # this works
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this:

.. code-block:: yaml

    #...
    vars:
      usermod_args:
        name: testuser
        state: present
        update_password: always
    tasks:
      - user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence
found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take
precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding
with variables (this will also disable the original warning)
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services
and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our
`webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / GUI?
++++++++++++++++++++++++++++++++++++++++++++
Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`.
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task whose results or supplied command you do not want shown when using -v (verbose) mode, the following task or playbook attribute can be useful:

.. code-block:: yaml

    - name: secret task
      shell: /usr/bin/do_something --value={{ secret_value }}
      no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The ``no_log`` attribute can also apply to an entire play:

.. code-block:: yaml

    - hosts: all
      no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this
be applied to single tasks only, once a playbook is completed. Note that the use of the
``no_log`` attribute does not prevent data from being shown when debugging Ansible itself through
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even though previously you could use variables without them
(as in ``loop`` or ``with_`` clauses), since that made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you expect, if you need to use a dynamic variable use the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' ~ other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' ~ other_var) }}
To determine if a keyword requires ``{{ }}`` or even supports templating, use ``ansible-doc -t keyword <name>``.
This returns documentation on the keyword, including a ``template`` field with the values ``explicit`` (requires ``{{ }}``),
``implicit`` (assumes ``{{ }}``, so they are not needed) or ``static`` (no templating supported; all characters will be interpreted literally).
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
but you can still access the original through ``hostvars``:

.. code-block:: yaml+jinja

    original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on.
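For instance, a hedged sketch that hands the original address to a delegated task (the command is purely illustrative):

.. code-block:: yaml

    - name: reach the original host from the delegate
      ansible.builtin.command: "ping -c 1 {{ hostvars[inventory_hostname]['ansible_host'] }}"
      delegate_to: localhost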
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism:

.. code-block:: text

    failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request

In these releases, SCP tries to validate that the path of the file to fetch matches the requested path. The validation
fails if the remote filename requires quotes to escape spaces or non-ascii characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
* Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
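As one hedged illustration of the inventory-variable route described above (the group and host names are made up):

.. code-block:: yaml

    # inventory.yml (hypothetical)
    all:
      children:
        legacy_scp_hosts:
          hosts:
            host1.example.com:
          vars:
            ansible_scp_if_ssh: false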
.. _mfa_support:
Does Ansible support multi-factor authentication 2FA/MFA/biometrics/fingerprint/usbkey/OTP/...
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction.
As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction.
Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also
tend to have very short expiration periods, so they require frequent reauthorization, which is also an issue with many hosts and/or
a long set of tasks.
In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures.
With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution.
.. _complex_configuration_validation:
The 'validate' option is not enough for my needs, what do I do?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails.
This uses the temporary file Ansible creates before doing the final update. In many cases this does not work since the validation tools
for the specific application require either specific names, multiple files or some other factor that is not present in this simple feature.
For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue
and backups, which most file based modules also support:
.. code-block:: yaml
    - name: update config and backout if validation fails
      block:
        - name: do the actual update, works with copy, lineinfile and any action that allows for `backup`.
          template: src=template.j2 dest=/x/y/z backup=yes moreoptions=stuff
          register: updated

        - name: run validation, this will change a lot as needed. We assume it returns an error when not passing, use `failed_when` if otherwise.
          shell: run_validation_command
          become: true
          become_user: requiredbyapp
          environment:
            WEIRD_REQUIREMENT: 1

      rescue:
        - name: restore backup file to original, in the hope the previous configuration was working.
          copy:
            remote_src: true
            dest: /x/y/z
            src: "{{ updated['backup_file'] }}"

      always:
        - name: We choose to always delete backup, but could copy or move, or only delete in rescue.
          file:
            path: "{{ updated['backup_file'] }}"
            state: absent
.. _jinja2_faqs:
Why does the ``regex_search`` filter return `None` instead of an empty string?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Until the jinja2 2.10 release, Jinja was only able to return strings, but Ansible needed Python objects in some cases. Ansible uses ``safe_eval`` and only sends strings that look like certain types of Python objects through this function. With ``regex_search`` that does not find a match, the result (``None``) is converted to the string "None" which is not useful in non-native jinja2.
The following example of a single templating action shows this behavior:
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') }}
This example does not result in a Python ``None``, so Ansible historically converted it to "" (empty string).
The native jinja2 functionality actually allows us to return full Python objects, that are always represented as Python objects everywhere, and as such the result of a single templating action with ``regex_search`` can result in the Python ``None``.
.. note::
Native jinja2 functionality is not needed when ``regex_search`` is used as an intermediate result that is then compared to the jinja2 ``none`` test.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') is none }}
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Documentation for Ansible is kept in the main project git repository, and complete instructions
for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _legacy_vs_builtin:
What is the difference between ``ansible.legacy`` and ``ansible.builtin`` collections?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neither is a real collection. They are virtually constructed by the core engine (synthetic collections).
The ``ansible.builtin`` collection only refers to plugins that ship with ``ansible-core``.
The ``ansible.legacy`` collection is a superset of ``ansible.builtin`` (you can reference the plugins from builtin through ``ansible.legacy``). You also get the ability to
add 'custom' plugins in the :ref:`configured paths and adjacent directories <ansible_search_path>`, with the ability to override the builtin plugins that have the same name.
Also, ``ansible.legacy`` is what you get by default when you do not specify an FQCN.
So this:
.. code-block:: yaml
- shell: echo hi
Is really equivalent to:
.. code-block:: yaml
- ansible.legacy.shell: echo hi
Though, if you do not override the ``shell`` module, you can also just write it as ``ansible.builtin.shell``, since legacy will resolve to the builtin collection.
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,033 |
Docs: Add code-block wrappers to code examples in strategy.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `strategy.rst ` file in the `docs/docsite/rst/plugins/` directory, there is one instance of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep strategy.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/plugins/strategy.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79033
|
https://github.com/ansible/ansible/pull/79048
|
35700f57cc62a15d7f04a82eaae2193c65fb8570
|
680bf029b1c2430fab6988bc6cd8138d2d06a023
| 2022-10-05T11:29:20Z |
python
| 2022-10-05T16:55:41Z |
docs/docsite/rst/plugins/lookup.rst
|
.. _lookup_plugins:
Lookup plugins
==============
.. contents::
:local:
:depth: 2
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all :ref:`templating <playbooks_templating>`, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources. You can :ref:`create custom lookup plugins <developing_lookup_plugins>`.
.. note::
- Lookups are executed with a working directory relative to the role or play,
  as opposed to local tasks, which are executed relative to the executed script.
- Pass ``wantlist=True`` to lookups to use in Jinja2 template "for" loops.
- By default, lookup return values are marked as unsafe for security reasons. If you trust the outside source your lookup accesses, pass ``allow_unsafe=True`` to allow Jinja2 templates to evaluate lookup values.
.. warning::
- Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the `|quote` filter to ensure safe usage.
.. _enabling_lookup:
Enabling lookup plugins
-----------------------
Ansible enables all lookup plugins it can find. You can activate a custom lookup by either dropping it into a ``lookup_plugins`` directory adjacent to your play, inside the ``plugins/lookup/`` directory of a collection you have installed, inside a standalone role, or in one of the lookup directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
.. _using_lookup:
Using lookup plugins
--------------------
You can use lookup plugins anywhere you can use templating in Ansible: in a play, in a variables file, or in a Jinja2 template for the :ref:`template <template_module>` module. For more information on using lookup plugins, see :ref:`playbooks_lookups`.
.. code-block:: YAML+Jinja
    vars:
      file_contents: "{{ lookup('file', 'path/to/file.txt') }}"
Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, lookups are expected to output lists; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup:

.. code-block:: yaml

    tasks:
      - name: count to 3
        debug: msg={{ item }}
        with_items: [1, 2, 3]
You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example:

.. code-block:: yaml

    tasks:
      - name: valid but useless and over complicated chained lookups and filters
        debug: msg="find the answer here:\n{{ lookup('url', 'https://google.com/search/?q=' + item|urlencode)|join(' ') }}"
        with_nested:
          - "{{ lookup('consul_kv', 'bcs/' + lookup('file', '/the/question') + ', host=localhost, port=2000')|shuffle }}"
          - "{{ lookup('sequence', 'end=42 start=2 step=2')|map('log', 4)|list }}"
          - ['a', 'c', 'd', 'c']
.. versionadded:: 2.6
You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error. For example:
To ignore lookup errors:

.. code-block:: yaml

    - name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
      debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
ok: [localhost] => {
"msg": ""
}
To get a warning instead of a failure:

.. code-block:: yaml

    - name: if this file does not exist, let me know, but continue
      debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
[WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile
ok: [localhost] => {
"msg": ""
}
To get a fatal error (the default):

.. code-block:: yaml

    - name: if this file does not exist, FAIL (this is the default)
      debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile"}
.. _query:
Forcing lookups to return lists: ``query`` and ``wantlist=True``
----------------------------------------------------------------
.. versionadded:: 2.5
In Ansible 2.5, a new Jinja2 function called ``query`` was added for invoking lookup plugins. The difference between ``lookup`` and ``query`` is largely that ``query`` will always return a list.
The default behavior of ``lookup`` is to return a string of comma-separated values. ``lookup`` can be explicitly configured to return a list using ``wantlist=True``.
This feature provides an easier and more consistent interface for interacting with the new ``loop`` keyword, while maintaining backwards compatibility with other uses of ``lookup``.
The following examples are equivalent:
.. code-block:: jinja
lookup('dict', dict_variable, wantlist=True)
query('dict', dict_variable)
As demonstrated above, the behavior of ``wantlist=True`` is implicit when using ``query``.
Additionally, ``q`` was introduced as a shortform of ``query``:
.. code-block:: jinja
q('dict', dict_variable)
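A hedged sketch of how this pairs with the ``loop`` keyword (the variable name is illustrative):

.. code-block:: yaml

    - name: print each key/value pair from a dictionary
      ansible.builtin.debug:
        msg: "{{ item.key }}={{ item.value }}"
      loop: "{{ query('dict', my_dict) }}"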
.. _lookup_plugins_list:
Plugin list
-----------
You can use ``ansible-doc -t lookup -l`` to see the list of available plugins. Use ``ansible-doc -t lookup <plugin name>`` to see specific documents and examples.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`callback_plugins`
Ansible callback plugins
:ref:`filter_plugins`
Jinja2 filter plugins
:ref:`test_plugins`
Jinja2 test plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,033 |
Docs: Add code-block wrappers to code examples in strategy.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `strategy.rst ` file in the `docs/docsite/rst/plugins/` directory, there is one instance of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep strategy.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/plugins/strategy.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79033
|
https://github.com/ansible/ansible/pull/79048
|
35700f57cc62a15d7f04a82eaae2193c65fb8570
|
680bf029b1c2430fab6988bc6cd8138d2d06a023
| 2022-10-05T11:29:20Z |
python
| 2022-10-05T16:55:41Z |
docs/docsite/rst/plugins/strategy.rst
|
.. _strategy_plugins:
Strategy plugins
================
.. contents::
:local:
:depth: 2
Strategy plugins control the flow of play execution by handling task and host scheduling. For more information on using strategy plugins and other ways to control execution order, see :ref:`playbooks_strategies`.
.. _enable_strategy:
Enabling strategy plugins
-------------------------
All strategy plugins shipped with Ansible are enabled by default. You can enable a custom strategy plugin by
putting it in one of the lookup directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
.. _using_strategy:
Using strategy plugins
----------------------
Only one strategy plugin can be used in a play, but you can use different ones for each play in a playbook or ansible run. By default Ansible uses the :ref:`linear <linear_strategy>` plugin. You can change this default in Ansible :ref:`configuration <ansible_configuration_settings>` using an environment variable:
.. code-block:: shell
export ANSIBLE_STRATEGY=free
or in the `ansible.cfg` file:
.. code-block:: ini
[defaults]
strategy=linear
You can also specify the strategy plugin in a play with the :ref:`strategy keyword <playbook_keywords>`:

.. code-block:: yaml

    - hosts: all
      strategy: debug
      tasks:
        - copy: src=myhosts dest=/etc/hosts
          notify: restart_tomcat

        - package: name=tomcat state=present

      handlers:
        - name: restart_tomcat
          service: name=tomcat state=restarted
.. _strategy_plugin_list:
Plugin list
-----------
You can use ``ansible-doc -t strategy -l`` to see the list of available plugins.
Use ``ansible-doc -t strategy <plugin name>`` to see plugin-specific documentation and examples.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`inventory_plugins`
Inventory plugins
:ref:`callback_plugins`
Callback plugins
:ref:`filter_plugins`
Filter plugins
:ref:`test_plugins`
Test plugins
:ref:`lookup_plugins`
Lookup plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,032 |
Docs: Add code-block wrappers to code examples in lookup.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `lookup.rst` file in the `docs/docsite/rst/plugins/` directory, there are 5 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep lookup.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/plugins/lookup.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79032
|
https://github.com/ansible/ansible/pull/79048
|
35700f57cc62a15d7f04a82eaae2193c65fb8570
|
680bf029b1c2430fab6988bc6cd8138d2d06a023
| 2022-10-05T11:23:43Z |
python
| 2022-10-05T16:55:41Z |
docs/docsite/rst/plugins/lookup.rst
|
.. _lookup_plugins:
Lookup plugins
==============
.. contents::
:local:
:depth: 2
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all :ref:`templating <playbooks_templating>`, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources. You can :ref:`create custom lookup plugins <developing_lookup_plugins>`.
.. note::
- Lookups are executed with a working directory relative to the role or play,
as opposed to local tasks, which are executed relative to the executed script.
- Pass ``wantlist=True`` to lookups to use in Jinja2 template "for" loops.
- By default, lookup return values are marked as unsafe for security reasons. If you trust the outside source your lookup accesses, pass ``allow_unsafe=True`` to allow Jinja2 templates to evaluate lookup values.
.. warning::
- Some lookups pass arguments to a shell. When using variables from a remote/untrusted source, use the ``|quote`` filter to ensure safe usage.
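
For example, a minimal sketch of quoting an untrusted value before it reaches a shell-backed lookup (``user_input`` is a hypothetical variable supplied from outside):

.. code-block:: YAML+Jinja

   vars:
     # quote the untrusted value so it cannot inject extra shell commands
     safe_output: "{{ lookup('ansible.builtin.pipe', 'echo ' + user_input | quote) }}"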
.. _enabling_lookup:
Enabling lookup plugins
-----------------------
Ansible enables all lookup plugins it can find. You can activate a custom lookup by either dropping it into a ``lookup_plugins`` directory adjacent to your play, inside the ``plugins/lookup/`` directory of a collection you have installed, inside a standalone role, or in one of the lookup directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
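
Once enabled, you can invoke a custom lookup by its short name, or by its fully qualified collection name if it ships in a collection. A minimal sketch, where ``my_namespace.my_collection.my_lookup`` is a hypothetical plugin:

.. code-block:: YAML+Jinja

   vars:
     # the term passed here is whatever the custom plugin expects
     my_value: "{{ lookup('my_namespace.my_collection.my_lookup', 'some_term') }}"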
.. _using_lookup:
Using lookup plugins
--------------------
You can use lookup plugins anywhere you can use templating in Ansible: in a play, in variables file, or in a Jinja2 template for the :ref:`template <template_module>` module. For more information on using lookup plugins, see :ref:`playbooks_lookups`.
.. code-block:: YAML+Jinja
vars:
file_contents: "{{ lookup('file', 'path/to/file.txt') }}"
Lookups are an integral part of loops. Wherever you see ``with_``, the part after the underscore is the name of a lookup. For this reason, lookups are expected to output lists; for example, ``with_items`` uses the :ref:`items <items_lookup>` lookup:

.. code-block:: YAML+Jinja

   tasks:
     - name: count to 3
       debug: msg={{ item }}
       with_items: [1, 2, 3]
You can combine lookups with :ref:`filters <playbooks_filters>`, :ref:`tests <playbooks_tests>` and even each other to do some complex data generation and manipulation. For example:

.. code-block:: YAML+Jinja

   tasks:
     - name: valid but useless and over complicated chained lookups and filters
       debug: msg="find the answer here:\n{{ lookup('url', 'https://google.com/search/?q=' + item|urlencode)|join(' ') }}"
       with_nested:
         - "{{ lookup('consul_kv', 'bcs/' + lookup('file', '/the/question') + ', host=localhost, port=2000')|shuffle }}"
         - "{{ lookup('sequence', 'end=42 start=2 step=2')|map('log', 4)|list }}"
         - ['a', 'c', 'd', 'c']
.. versionadded:: 2.6
You can control how errors behave in all lookup plugins by setting ``errors`` to ``ignore``, ``warn``, or ``strict``. The default setting is ``strict``, which causes the task to fail if the lookup returns an error. For example:
To ignore lookup errors:

.. code-block:: YAML+Jinja

   - name: if this file does not exist, I do not care .. file plugin itself warns anyway ...
     debug: msg="{{ lookup('file', '/nosuchfile', errors='ignore') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
ok: [localhost] => {
"msg": ""
}
To get a warning instead of a failure:

.. code-block:: YAML+Jinja

   - name: if this file does not exist, let me know, but continue
     debug: msg="{{ lookup('file', '/nosuchfile', errors='warn') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
[WARNING]: An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile
ok: [localhost] => {
"msg": ""
}
To get a fatal error (the default):

.. code-block:: YAML+Jinja

   - name: if this file does not exist, FAIL (this is the default)
     debug: msg="{{ lookup('file', '/nosuchfile', errors='strict') }}"
.. code-block:: ansible-output
[WARNING]: Unable to find '/nosuchfile' in expected paths (use -vvvvv to see paths)
fatal: [localhost]: FAILED! => {"msg": "An unhandled exception occurred while running the lookup plugin 'file'. Error was a <class 'ansible.errors.AnsibleError'>, original message: could not locate file in lookup: /nosuchfile"}
.. _query:
Forcing lookups to return lists: ``query`` and ``wantlist=True``
----------------------------------------------------------------
.. versionadded:: 2.5
In Ansible 2.5, a new Jinja2 function called ``query`` was added for invoking lookup plugins. The difference between ``lookup`` and ``query`` is largely that ``query`` will always return a list.
The default behavior of ``lookup`` is to return a string of comma-separated values. ``lookup`` can be explicitly configured to return a list using ``wantlist=True``.
This feature provides an easier and more consistent interface for interacting with the new ``loop`` keyword, while maintaining backwards compatibility with other uses of ``lookup``.
The following examples are equivalent:
.. code-block:: jinja
lookup('dict', dict_variable, wantlist=True)
query('dict', dict_variable)
As demonstrated above, the behavior of ``wantlist=True`` is implicit when using ``query``.
Additionally, ``q`` was introduced as a shortform of ``query``:
.. code-block:: jinja
q('dict', dict_variable)
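
Because ``query`` always returns a list, it pairs naturally with the ``loop`` keyword. A minimal sketch, assuming a ``user_data`` dictionary variable:

.. code-block:: YAML+Jinja

   - name: iterate over dictionary entries
     ansible.builtin.debug:
       msg: "{{ item.key }} -> {{ item.value }}"
     loop: "{{ query('dict', user_data) }}"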
.. _lookup_plugins_list:
Plugin list
-----------
You can use ``ansible-doc -t lookup -l`` to see the list of available plugins. Use ``ansible-doc -t lookup <plugin name>`` to see specific documentation and examples.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`inventory_plugins`
Ansible inventory plugins
:ref:`callback_plugins`
Ansible callback plugins
:ref:`filter_plugins`
Jinja2 filter plugins
:ref:`test_plugins`
Jinja2 test plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,032 |
Docs: Add code-block wrappers to code examples in lookup.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `lookup.rst` file in the `docs/docsite/rst/plugins/` directory, there are 5 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep lookup.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/plugins/lookup.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79032
|
https://github.com/ansible/ansible/pull/79048
|
35700f57cc62a15d7f04a82eaae2193c65fb8570
|
680bf029b1c2430fab6988bc6cd8138d2d06a023
| 2022-10-05T11:23:43Z |
python
| 2022-10-05T16:55:41Z |
docs/docsite/rst/plugins/strategy.rst
|
.. _strategy_plugins:
Strategy plugins
================
.. contents::
:local:
:depth: 2
Strategy plugins control the flow of play execution by handling task and host scheduling. For more information on using strategy plugins and other ways to control execution order, see :ref:`playbooks_strategies`.
.. _enable_strategy:
Enabling strategy plugins
-------------------------
All strategy plugins shipped with Ansible are enabled by default. You can enable a custom strategy plugin by
putting it in one of the lookup directory sources configured in :ref:`ansible.cfg <ansible_configuration_settings>`.
.. _using_strategy:
Using strategy plugins
----------------------
Only one strategy plugin can be used in a play, but you can use different ones for each play in a playbook or ansible run. By default Ansible uses the :ref:`linear <linear_strategy>` plugin. You can change this default in Ansible :ref:`configuration <ansible_configuration_settings>` using an environment variable:
.. code-block:: shell
export ANSIBLE_STRATEGY=free
or in the ``ansible.cfg`` file:
.. code-block:: ini
[defaults]
strategy=linear
You can also specify the strategy plugin in a play with the :ref:`strategy keyword <playbook_keywords>`:

.. code-block:: yaml

   - hosts: all
     strategy: debug
     tasks:
       - copy: src=myhosts dest=/etc/hosts
         notify: restart_tomcat
       - package: name=tomcat state=present
     handlers:
       - name: restart_tomcat
         service: name=tomcat state=restarted
.. _strategy_plugin_list:
Plugin list
-----------
You can use ``ansible-doc -t strategy -l`` to see the list of available plugins.
Use ``ansible-doc -t strategy <plugin name>`` to see plugin-specific documentation and examples.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`inventory_plugins`
Inventory plugins
:ref:`callback_plugins`
Callback plugins
:ref:`filter_plugins`
Filter plugins
:ref:`test_plugins`
Test plugins
:ref:`lookup_plugins`
Lookup plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,023 |
meta flush_handlers doesn't work in role
|
### Summary
Triggering handlers with `ansible.builtin.meta: flush_handlers` in a role results in an error
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters/ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ ├── [drwxr-xr-x] handlers
│ │ └── [-rw-r--r--] main.yml
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
3 directories, 3 files
```
```
$ cat test.yml
- hosts: localhost
pre_tasks:
- ansible.builtin.command: /bin/true
notify: do nothing
handlers:
- name: do nothing
ansible.builtin.debug:
msg: hello
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: /bin/true
notify: noop
- ansible.builtin.meta: flush_handlers
- ansible.builtin.command: /bin/true
```
```
$ cat testrole/handlers/main.yml
---
- name: noop
ansible.builtin.debug:
msg: world
```
### Expected Results
I expect the play to succeed and trigger the handler
```
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
RUNNING HANDLER [testrole : noop] **********************************************************************
ok: [localhost] => {
"msg": "world"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79023
|
https://github.com/ansible/ansible/pull/79057
|
f8f1c6a6b5d97df779ff4d427cebe41427533dd9
|
e1daaae42af1a4e465edbdad4bb3c6dd7e7110d5
| 2022-10-04T15:45:20Z |
python
| 2022-10-06T13:26:49Z |
changelogs/fragments/79023-fix-flush_handlers-fqcn.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,023 |
meta flush_handlers doesn't work in role
|
### Summary
Triggering handlers with `ansible.builtin.meta: flush_handlers` in a role results in an error
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters/ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ ├── [drwxr-xr-x] handlers
│ │ └── [-rw-r--r--] main.yml
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
3 directories, 3 files
```
```
$ cat test.yml
- hosts: localhost
pre_tasks:
- ansible.builtin.command: /bin/true
notify: do nothing
handlers:
- name: do nothing
ansible.builtin.debug:
msg: hello
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: /bin/true
notify: noop
- ansible.builtin.meta: flush_handlers
- ansible.builtin.command: /bin/true
```
```
$ cat testrole/handlers/main.yml
---
- name: noop
ansible.builtin.debug:
msg: world
```
### Expected Results
I expect the play to succeed and trigger the handler
```
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
RUNNING HANDLER [testrole : noop] **********************************************************************
ok: [localhost] => {
"msg": "world"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79023
|
https://github.com/ansible/ansible/pull/79057
|
f8f1c6a6b5d97df779ff4d427cebe41427533dd9
|
e1daaae42af1a4e465edbdad4bb3c6dd7e7110d5
| 2022-10-04T15:45:20Z |
python
| 2022-10-06T13:26:49Z |
lib/ansible/plugins/strategy/linear.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: linear
short_description: Executes tasks in a linear fashion
description:
- Task execution is in lockstep per host batch as defined by C(serial) (default all).
Up to the fork limit of hosts will execute each task at the same time and then
the next series of hosts until the batch is done, before going on to the next task.
version_added: "2.0"
notes:
- This was the default Ansible behaviour before 'strategy plugins' were introduced in 2.0.
author: Ansible Core Team
'''
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleAssertionError, AnsibleParserError
from ansible.executor.play_iterator import IteratingStates, FailedStates
from ansible.module_utils._text import to_text
from ansible.playbook.handler import Handler
from ansible.playbook.included_file import IncludedFile
from ansible.playbook.task import Task
from ansible.plugins.loader import action_loader
from ansible.plugins.strategy import StrategyBase
from ansible.template import Templar
from ansible.utils.display import Display
display = Display()
class StrategyModule(StrategyBase):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# used for the lockstep to indicate to run handlers
self._in_handlers = False
def _get_next_task_lockstep(self, hosts, iterator):
'''
Returns a list of (host, task) tuples, where the task may
be a noop task to keep the iterator in lock step across
all hosts.
'''
noop_task = Task()
noop_task.action = 'meta'
noop_task.args['_raw_params'] = 'noop'
noop_task.implicit = True
noop_task.set_loader(iterator._play._loader)
state_task_per_host = {}
for host in hosts:
state, task = iterator.get_next_task_for_host(host, peek=True)
if task is not None:
state_task_per_host[host] = state, task
if not state_task_per_host:
return [(h, None) for h in hosts]
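# leave the handlers lockstep phase once no host remains in IteratingStates.HANDLERS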
if self._in_handlers and not any(filter(
lambda rs: rs == IteratingStates.HANDLERS,
(s.run_state for s, _ in state_task_per_host.values()))
):
self._in_handlers = False
if self._in_handlers:
lowest_cur_handler = min(
s.cur_handlers_task for s, t in state_task_per_host.values()
if s.run_state == IteratingStates.HANDLERS
)
else:
task_uuids = [t._uuid for s, t in state_task_per_host.values()]
_loop_cnt = 0
while _loop_cnt <= 1:
try:
cur_task = iterator.all_tasks[iterator.cur_task]
except IndexError:
# pick up any tasks left after clear_host_errors
iterator.cur_task = 0
_loop_cnt += 1
else:
iterator.cur_task += 1
if cur_task._uuid in task_uuids:
break
else:
# prevent infinite loop
raise AnsibleAssertionError(
'BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.'
)
host_tasks = []
for host, (state, task) in state_task_per_host.items():
if ((self._in_handlers and lowest_cur_handler == state.cur_handlers_task) or
(not self._in_handlers and cur_task._uuid == task._uuid)):
iterator.set_state_for_host(host.name, state)
host_tasks.append((host, task))
else:
host_tasks.append((host, noop_task))
# once hosts synchronize on 'flush_handlers' lockstep enters
# '_in_handlers' phase where handlers are run instead of tasks
# until at least one host is in IteratingStates.HANDLERS
if (not self._in_handlers and cur_task.action == 'meta' and
cur_task.args.get('_raw_params') == 'flush_handlers'):
self._in_handlers = True
return host_tasks
def run(self, iterator, play_context):
'''
The linear strategy is simple - get the next task and queue
it for all hosts, then wait for the queue to drain before
moving on to the next task
'''
# iterate over each task, while there is one left to run
result = self._tqm.RUN_OK
work_to_do = True
self._set_hosts_cache(iterator._play)
while work_to_do and not self._tqm._terminated:
try:
display.debug("getting the remaining hosts for this loop")
hosts_left = self.get_hosts_left(iterator)
display.debug("done getting the remaining hosts for this loop")
# queue up this task for each host in the inventory
callback_sent = False
work_to_do = False
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
# skip control
skip_rest = False
choose_step = True
# flag set if task is set to any_errors_fatal
any_errors_fatal = False
results = []
for (host, task) in host_tasks:
if not task:
continue
if self._tqm._terminated:
break
run_once = False
work_to_do = True
# check to see if this task should be skipped, due to it being a member of a
# role which has already run (and whether that role allows duplicate execution)
if not isinstance(task, Handler) and task._role and task._role.has_run(host):
# If there is no metadata, the default behavior is to not allow duplicates,
# if there is metadata, check to see if the allow_duplicates flag was set to true
if task._role._metadata is None or task._role._metadata and not task._role._metadata.allow_duplicates:
display.debug("'%s' skipped because role has already run" % task)
continue
display.debug("getting variables")
task_vars = self._variable_manager.get_vars(play=iterator._play, host=host, task=task,
_hosts=self._hosts_cache, _hosts_all=self._hosts_cache_all)
self.add_tqm_variables(task_vars, play=iterator._play)
templar = Templar(loader=self._loader, variables=task_vars)
display.debug("done getting variables")
# test to see if the task across all hosts points to an action plugin which
# sets BYPASS_HOST_LOOP to true, or if it has run_once enabled. If so, we
# will only send this task to the first host in the list.
task_action = templar.template(task.action)
try:
action = action_loader.get(task_action, class_only=True, collection_list=task.collections)
except KeyError:
# we don't care here, because the action may simply not have a
# corresponding action plugin
action = None
if task_action in C._ACTION_META:
# for the linear strategy, we run meta tasks just once and for
# all hosts currently being iterated over rather than one host
results.extend(self._execute_meta(task, play_context, iterator, host))
if task.args.get('_raw_params', None) not in ('noop', 'reset_connection', 'end_host', 'role_complete', 'flush_handlers'):
run_once = True
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
else:
# handle step if needed, skip meta actions as they are used internally
if self._step and choose_step:
if self._take_step(task):
choose_step = False
else:
skip_rest = True
break
run_once = templar.template(task.run_once) or action and getattr(action, 'BYPASS_HOST_LOOP', False)
if (task.any_errors_fatal or run_once) and not task.ignore_errors:
any_errors_fatal = True
if not callback_sent:
display.debug("sending task start callback, copying the task so we can template it temporarily")
saved_name = task.name
display.debug("done copying, going to template now")
try:
task.name = to_text(templar.template(task.name, fail_on_undefined=False), nonstring='empty')
display.debug("done templating")
except Exception:
# just ignore any errors during task name templating,
# we don't care if it just shows the raw name
display.debug("templating failed for some reason")
display.debug("here goes the callback...")
if isinstance(task, Handler):
self._tqm.send_callback('v2_playbook_on_handler_task_start', task)
else:
self._tqm.send_callback('v2_playbook_on_task_start', task, is_conditional=False)
task.name = saved_name
callback_sent = True
display.debug("sending task start callback")
self._blocked_hosts[host.get_name()] = True
self._queue_task(host, task, task_vars, play_context)
del task_vars
# if we're bypassing the host loop, break out now
if run_once:
break
results.extend(self._process_pending_results(iterator, max_passes=max(1, int(len(self._tqm._workers) * 0.1))))
# go to next host/task group
if skip_rest:
continue
display.debug("done queuing things up, now waiting for results queue to drain")
if self._pending_results > 0:
results.extend(self._wait_on_pending_results(iterator))
self.update_active_connections(results)
included_files = IncludedFile.process_include_results(
results,
iterator=iterator,
loader=self._loader,
variable_manager=self._variable_manager
)
if len(included_files) > 0:
display.debug("we have included files to process")
display.debug("generating all_blocks data")
all_blocks = dict((host, []) for host in hosts_left)
display.debug("done generating all_blocks data")
included_tasks = []
failed_includes_hosts = set()
for included_file in included_files:
display.debug("processing included file: %s" % included_file._filename)
is_handler = False
try:
if included_file._is_role:
new_ir = self._copy_included_file(included_file)
new_blocks, handler_blocks = new_ir.get_block_list(
play=iterator._play,
variable_manager=self._variable_manager,
loader=self._loader,
)
else:
is_handler = isinstance(included_file._task, Handler)
new_blocks = self._load_included_file(included_file, iterator=iterator, is_handler=is_handler)
# let PlayIterator know about any new handlers included via include_role or
# import_role within include_role/include_tasks
iterator.handlers = [h for b in iterator._play.handlers for h in b.block]
display.debug("iterating over new_blocks loaded from include file")
for new_block in new_blocks:
if is_handler:
for task in new_block.block:
task.notified_hosts = included_file._hosts[:]
final_block = new_block
else:
task_vars = self._variable_manager.get_vars(
play=iterator._play,
task=new_block.get_first_parent_include(),
_hosts=self._hosts_cache,
_hosts_all=self._hosts_cache_all,
)
display.debug("filtering new block on tags")
final_block = new_block.filter_tagged_tasks(task_vars)
display.debug("done filtering new block on tags")
included_tasks.extend(final_block.get_tasks())
for host in hosts_left:
# handlers are included regardless of _hosts so noop
# tasks do not have to be created for lockstep,
# not notified handlers are then simply skipped
# in the PlayIterator
if host in included_file._hosts or is_handler:
all_blocks[host].append(final_block)
display.debug("done iterating over new_blocks loaded from include file")
except AnsibleParserError:
raise
except AnsibleError as e:
if included_file._is_role:
# include_role does not have on_include callback so display the error
display.error(to_text(e), wrap_text=False)
for r in included_file._results:
r._result['failed'] = True
failed_includes_hosts.add(r._host)
continue
for host in failed_includes_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
# finally go through all of the hosts and append the
# accumulated blocks to their list of tasks
display.debug("extending task lists for all hosts with included blocks")
for host in hosts_left:
iterator.add_tasks(host, all_blocks[host])
iterator.all_tasks[iterator.cur_task:iterator.cur_task] = included_tasks
display.debug("done extending task lists")
display.debug("done processing included files")
display.debug("results queue empty")
display.debug("checking for any_errors_fatal")
failed_hosts = []
unreachable_hosts = []
for res in results:
# execute_meta() does not set 'failed' in the TaskResult
# so we skip checking it with the meta tasks and look just at the iterator
if (res.is_failed() or res._task.action in C._ACTION_META) and iterator.is_failed(res._host):
failed_hosts.append(res._host.name)
elif res.is_unreachable():
unreachable_hosts.append(res._host.name)
# if any_errors_fatal and we had an error, mark all hosts as failed
if any_errors_fatal and (len(failed_hosts) > 0 or len(unreachable_hosts) > 0):
dont_fail_states = frozenset([IteratingStates.RESCUE, IteratingStates.ALWAYS])
for host in hosts_left:
(s, _) = iterator.get_next_task_for_host(host, peek=True)
# the state may actually be in a child state, use the get_active_state()
# method in the iterator to figure out the true active state
s = iterator.get_active_state(s)
if s.run_state not in dont_fail_states or \
s.run_state == IteratingStates.RESCUE and s.fail_state & FailedStates.RESCUE != 0:
self._tqm._failed_hosts[host.name] = True
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug("done checking for any_errors_fatal")
display.debug("checking for max_fail_percentage")
if iterator._play.max_fail_percentage is not None and len(results) > 0:
percentage = iterator._play.max_fail_percentage / 100.0
if (len(self._tqm._failed_hosts) / iterator.batch_size) > percentage:
for host in hosts_left:
# don't double-mark hosts, or the iterator will potentially
# fail them out of the rescue/always states
if host.name not in failed_hosts:
self._tqm._failed_hosts[host.name] = True
iterator.mark_host_failed(host)
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
result |= self._tqm.RUN_FAILED_BREAK_PLAY
display.debug('(%s failed / %s total )> %s max fail' % (len(self._tqm._failed_hosts), iterator.batch_size, percentage))
display.debug("done checking for max_fail_percentage")
display.debug("checking to see if all hosts have failed and the running result is not ok")
if result != self._tqm.RUN_OK and len(self._tqm._failed_hosts) >= len(hosts_left):
display.debug("^ not ok, so returning result now")
self._tqm.send_callback('v2_playbook_on_no_hosts_remaining')
return result
display.debug("done checking to see if all hosts have failed")
except (IOError, EOFError) as e:
display.debug("got IOError/EOFError in task loop: %s" % e)
# most likely an abort, return failed
return self._tqm.RUN_UNKNOWN_ERROR
# run the base class run() method, which executes the cleanup function
# and runs any outstanding handlers which have been triggered
return super(StrategyModule, self).run(iterator, play_context, result)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,023 |
meta flush_handlers doesn't work in role
|
### Summary
Triggering handlers with `ansible.builtin.meta: flush_handlers` in a role results in an error
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters/ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ ├── [drwxr-xr-x] handlers
│ │ └── [-rw-r--r--] main.yml
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
3 directories, 3 files
```
```
$ cat test.yml
- hosts: localhost
pre_tasks:
- ansible.builtin.command: /bin/true
notify: do nothing
handlers:
- name: do nothing
ansible.builtin.debug:
msg: hello
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: /bin/true
notify: noop
- ansible.builtin.meta: flush_handlers
- ansible.builtin.command: /bin/true
```
```
$ cat testrole/handlers/main.yml
---
- name: noop
ansible.builtin.debug:
msg: world
```
### Expected Results
I expect the play to succeed and trigger the handler
```
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
RUNNING HANDLER [testrole : noop] **********************************************************************
ok: [localhost] => {
"msg": "world"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79023
|
https://github.com/ansible/ansible/pull/79057
|
f8f1c6a6b5d97df779ff4d427cebe41427533dd9
|
e1daaae42af1a4e465edbdad4bb3c6dd7e7110d5
| 2022-10-04T15:45:20Z |
python
| 2022-10-06T13:26:49Z |
test/integration/targets/handlers/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_FORCE_HANDLERS
ANSIBLE_FORCE_HANDLERS=false
# simple handler test
ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
# simple from_handlers test
ansible-playbook from_handlers.yml -i inventory.handlers -v "$@" --tags scenario1
ansible-playbook test_listening_handlers.yml -i inventory.handlers -v "$@"
[ "$(ansible-playbook test_handlers.yml -i inventory.handlers -v "$@" --tags scenario2 -l A \
| grep -E -o 'RUNNING HANDLER \[test_handlers : .*]')" = "RUNNING HANDLER [test_handlers : test handler]" ]
# Test forcing handlers using the linear and free strategy
for strategy in linear free; do
export ANSIBLE_STRATEGY=$strategy
# Not forcing, should only run on successful host
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
# Forcing from command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from command line, should only run later tasks on unfailed hosts
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers \
| grep -E -o CALLED_TASK_. | sort | uniq | xargs)" = "CALLED_TASK_B CALLED_TASK_D CALLED_TASK_E" ]
# Forcing from command line, should call handlers even if all hosts fail
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal --force-handlers -e fail_all=yes \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing from ansible.cfg
[ "$(ANSIBLE_FORCE_HANDLERS=true ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags normal \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing true in play
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_true_in_play \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_A CALLED_HANDLER_B" ]
# Forcing false in play, which overrides command line
[ "$(ansible-playbook test_force_handlers.yml -i inventory.handlers -v "$@" --tags force_false_in_play --force-handlers \
| grep -E -o CALLED_HANDLER_. | sort | uniq | xargs)" = "CALLED_HANDLER_B" ]
unset ANSIBLE_STRATEGY
done
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags playbook_include_handlers \
| grep -E -o 'RUNNING HANDLER \[.*]')" = "RUNNING HANDLER [test handler]" ]
[ "$(ansible-playbook test_handlers_include.yml -i ../../inventory -v "$@" --tags role_include_handlers \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include : .*]')" = "RUNNING HANDLER [test_handlers_include : test handler]" ]
[ "$(ansible-playbook test_handlers_include_role.yml -i ../../inventory -v "$@" \
| grep -E -o 'RUNNING HANDLER \[test_handlers_include_role : .*]')" = "RUNNING HANDLER [test_handlers_include_role : test handler]" ]
# Notify handler listen
ansible-playbook test_handlers_listen.yml -i inventory.handlers -v "$@"
# Notify inexistent handlers results in error
set +e
result="$(ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "ERROR! The requested handler 'notify_inexistent_handler' was not found in either the main handlers list nor in the listening handlers list" <<< "$result"
# Notify inexistent handlers without errors when ANSIBLE_ERROR_ON_MISSING_HANDLER=false
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_handlers_inexistent_notify.yml -i inventory.handlers -v "$@"
ANSIBLE_ERROR_ON_MISSING_HANDLER=false ansible-playbook test_templating_in_handlers.yml -v "$@"
# https://github.com/ansible/ansible/issues/36649
output_dir=/tmp
set +e
result="$(ansible-playbook test_handlers_any_errors_fatal.yml -e output_dir=$output_dir -i inventory.handlers -v "$@" 2>&1)"
set -e
[ ! -f $output_dir/should_not_exist_B ] || (rm -f $output_dir/should_not_exist_B && exit 1)
# https://github.com/ansible/ansible/issues/47287
[ "$(ansible-playbook test_handlers_including_task.yml -i ../../inventory -v "$@" | grep -E -o 'failed=[0-9]+')" = "failed=0" ]
# https://github.com/ansible/ansible/issues/71222
ansible-playbook test_role_handlers_including_tasks.yml -i ../../inventory -v "$@"
# https://github.com/ansible/ansible/issues/27237
set +e
result="$(ansible-playbook test_handlers_template_run_once.yml -i inventory.handlers "$@" 2>&1)"
set -e
grep -q "handler A" <<< "$result"
grep -q "handler B" <<< "$result"
# Test an undefined variable in another handler name isn't a failure
ansible-playbook 58841.yml "$@" --tags lazy_evaluation 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "1" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test templating a handler name with a defined variable
ansible-playbook 58841.yml "$@" --tags evaluation_time -e test_var=myvar | tee out.txt ; cat out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "1" ]
# Test the handler is not found when the variable is undefined
ansible-playbook 58841.yml "$@" --tags evaluation_time 2>&1 | tee out.txt ; cat out.txt
grep out.txt -e "ERROR! The requested handler 'handler name with myvar' was not found"
grep out.txt -e "\[WARNING\]: Handler 'handler name with {{ test_var }}' is unusable"
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
[ "$(grep out.txt -ce 'handler with var ran')" = "0" ]
# Test include_role and import_role cannot be used as handlers
ansible-playbook test_role_as_handler.yml "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! Using 'include_role' as a handler is not supported."
# Test notifying a handler from within include_tasks does not work anymore
ansible-playbook test_notify_included.yml "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'I was included')" = "1" ]
grep out.txt -e "ERROR! The requested handler 'handler_from_include' was not found in either the main handlers list nor in the listening handlers list"
ansible-playbook test_handlers_meta.yml -i inventory.handlers -vv "$@" | tee out.txt
[ "$(grep out.txt -ce 'RUNNING HANDLER \[noop_handler\]')" = "1" ]
[ "$(grep out.txt -ce 'META: noop')" = "1" ]
# https://github.com/ansible/ansible/issues/46447
set +e
test "$(ansible-playbook 46447.yml -i inventory.handlers -vv "$@" 2>&1 | grep -c 'SHOULD NOT GET HERE')"
set -e
# https://github.com/ansible/ansible/issues/52561
ansible-playbook 52561.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler1 ran')" = "1" ]
# Test flush_handlers meta task does not imply any_errors_fatal
ansible-playbook 54991.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "4" ]
ansible-playbook order.yml -i inventory.handlers "$@" 2>&1
set +e
ansible-playbook order.yml --force-handlers -e test_force_handlers=true -i inventory.handlers "$@" 2>&1
set -e
ansible-playbook include_handlers_fail_force.yml --force-handlers -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'included handler ran')" = "1" ]
ansible-playbook test_flush_handlers_as_handler.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
grep out.txt -e "ERROR! flush_handlers cannot be used as a handler"
ansible-playbook test_skip_flush.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran')" = "0" ]
ansible-playbook test_flush_in_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'handler ran in rescue')" = "1" ]
[ "$(grep out.txt -ce 'handler ran in always')" = "2" ]
[ "$(grep out.txt -ce 'lockstep works')" = "2" ]
ansible-playbook test_handlers_infinite_loop.yml -i inventory.handlers "$@" 2>&1
ansible-playbook test_flush_handlers_rescue_always.yml -i inventory.handlers "$@" 2>&1 | tee out.txt
[ "$(grep out.txt -ce 'rescue ran')" = "1" ]
[ "$(grep out.txt -ce 'always ran')" = "2" ]
[ "$(grep out.txt -ce 'should run for both hosts')" = "2" ]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,023 |
meta flush_handlers doesn't work in role
|
### Summary
Triggering handlers with `ansible.builtin.meta: flush_handlers` in a role results in an error
### Issue Type
Bug Report
### Component Name
meta
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters/ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ ├── [drwxr-xr-x] handlers
│ │ └── [-rw-r--r--] main.yml
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
3 directories, 3 files
```
```
$ cat test.yml
- hosts: localhost
pre_tasks:
- ansible.builtin.command: /bin/true
notify: do nothing
handlers:
- name: do nothing
ansible.builtin.debug:
msg: hello
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: /bin/true
notify: noop
- ansible.builtin.meta: flush_handlers
- ansible.builtin.command: /bin/true
```
```
$ cat testrole/handlers/main.yml
---
- name: noop
ansible.builtin.debug:
msg: world
```
### Expected Results
I expect the play to succeed and trigger the handler
```
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
RUNNING HANDLER [testrole : noop] **********************************************************************
ok: [localhost] => {
"msg": "world"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
PLAY RECAP *********************************************************************************************
localhost : ok=6 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
$ ansible-playbook test.yml -l localhost
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [Gathering Facts] *********************************************************************************
ok: [localhost]
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
RUNNING HANDLER [do nothing] ***************************************************************************
ok: [localhost] => {
"msg": "hello"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.meta] *****************************************************************
ERROR! BUG: There seems to be a mismatch between tasks in PlayIterator and HostStates.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79023
|
https://github.com/ansible/ansible/pull/79057
|
f8f1c6a6b5d97df779ff4d427cebe41427533dd9
|
e1daaae42af1a4e465edbdad4bb3c6dd7e7110d5
| 2022-10-04T15:45:20Z |
python
| 2022-10-06T13:26:49Z |
test/integration/targets/handlers/test_fqcn_meta_flush_handlers.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
changelogs/fragments/79021-dont-squash-in-validate.yml
| |
lib/ansible/playbook/base.py
|
# Copyright: (c) 2012-2014, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import itertools
import operator
import os
from copy import copy as shallowcopy
from jinja2.exceptions import UndefinedError
from ansible import constants as C
from ansible import context
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils.six import string_types
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils._text import to_text, to_native
from ansible.parsing.dataloader import DataLoader
from ansible.playbook.attribute import Attribute, FieldAttribute, ConnectionFieldAttribute, NonInheritableFieldAttribute
from ansible.plugins.loader import module_loader, action_loader
from ansible.utils.collection_loader._collection_finder import _get_collection_metadata, AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
from ansible.utils.vars import combine_vars, isidentifier, get_unique_id
display = Display()
def _validate_action_group_metadata(action, found_group_metadata, fq_group_name):
valid_metadata = {
'extend_group': {
'types': (list, string_types,),
'errortype': 'list',
},
}
metadata_warnings = []
validate = C.VALIDATE_ACTION_GROUP_METADATA
metadata_only = isinstance(action, dict) and 'metadata' in action and len(action) == 1
if validate and not metadata_only:
found_keys = ', '.join(sorted(list(action)))
metadata_warnings.append("The only expected key is metadata, but got keys: {keys}".format(keys=found_keys))
elif validate:
if found_group_metadata:
metadata_warnings.append("The group contains multiple metadata entries.")
if not isinstance(action['metadata'], dict):
metadata_warnings.append("The metadata is not a dictionary. Got {metadata}".format(metadata=action['metadata']))
else:
unexpected_keys = set(action['metadata'].keys()) - set(valid_metadata.keys())
if unexpected_keys:
metadata_warnings.append("The metadata contains unexpected keys: {0}".format(', '.join(unexpected_keys)))
unexpected_types = []
for field, requirement in valid_metadata.items():
if field not in action['metadata']:
continue
value = action['metadata'][field]
if not isinstance(value, requirement['types']):
unexpected_types.append("%s is %s (expected type %s)" % (field, value, requirement['errortype']))
if unexpected_types:
metadata_warnings.append("The metadata contains unexpected key types: {0}".format(', '.join(unexpected_types)))
if metadata_warnings:
metadata_warnings.insert(0, "Invalid metadata was found for action_group {0} while loading module_defaults.".format(fq_group_name))
display.warning(" ".join(metadata_warnings))
class FieldAttributeBase:
@classmethod
@property
def fattributes(cls):
# FIXME is this worth caching?
fattributes = {}
for class_obj in reversed(cls.__mro__):
for name, attr in list(class_obj.__dict__.items()):
if not isinstance(attr, Attribute):
continue
fattributes[name] = attr
if attr.alias:
setattr(class_obj, attr.alias, attr)
fattributes[attr.alias] = attr
return fattributes
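# Illustration (comment added for exposition; not in the original file):
# given class Base(FieldAttributeBase) defining name = FieldAttribute(...)
# and class Task(Base) defining async_val = FieldAttribute(..., alias='async'),
# Task.fattributes contains entries for 'name', 'async_val' and the alias
# 'async' -- reversed(cls.__mro__) visits base classes first, so subclasses
# can override, and alias names are registered next to the canonical one.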
def __init__(self):
# initialize the data loader and variable manager, which will be provided
# later when the object is actually loaded
self._loader = None
self._variable_manager = None
# other internal params
self._validated = False
self._squashed = False
self._finalized = False
# every object gets a random uuid:
self._uuid = get_unique_id()
# init vars, avoid using defaults in field declaration as it lives across plays
self.vars = dict()
@property
def finalized(self):
return self._finalized
def dump_me(self, depth=0):
''' this is never called from production code, it is here to be used when debugging as a 'complex print' '''
if depth == 0:
display.debug("DUMPING OBJECT ------------------------------------------------------")
display.debug("%s- %s (%s, id=%s)" % (" " * depth, self.__class__.__name__, self, id(self)))
if hasattr(self, '_parent') and self._parent:
self._parent.dump_me(depth + 2)
dep_chain = self._parent.get_dep_chain()
if dep_chain:
for dep in dep_chain:
dep.dump_me(depth + 2)
if hasattr(self, '_play') and self._play:
self._play.dump_me(depth + 2)
def preprocess_data(self, ds):
''' infrequently used method to do some pre-processing of legacy terms '''
return ds
def load_data(self, ds, variable_manager=None, loader=None):
''' walk the input datastructure and assign any values '''
if ds is None:
raise AnsibleAssertionError('ds (%s) should not be None but it is.' % ds)
# cache the datastructure internally
setattr(self, '_ds', ds)
# the variable manager class is used to manage and merge variables
# down to a single dictionary for reference in templating, etc.
self._variable_manager = variable_manager
# the data loader class is used to parse data from strings and files
if loader is not None:
self._loader = loader
else:
self._loader = DataLoader()
# call the preprocess_data() function to massage the data into
# something we can more easily parse, and then call the validation
# function on it to ensure there are no incorrect key values
ds = self.preprocess_data(ds)
self._validate_attributes(ds)
# Walk all attributes in the class. We sort them based on their priority
# so that certain fields can be loaded before others, if they are dependent.
for name, attr in sorted(self.fattributes.items(), key=operator.itemgetter(1)):
# copy the value over unless a _load_field method is defined
if name in ds:
method = getattr(self, '_load_%s' % name, None)
if method:
setattr(self, name, method(name, ds[name]))
else:
setattr(self, name, ds[name])
# run early, non-critical validation
self.validate()
# return the constructed object
return self
def get_ds(self):
try:
return getattr(self, '_ds')
except AttributeError:
return None
def get_loader(self):
return self._loader
def get_variable_manager(self):
return self._variable_manager
def _post_validate_debugger(self, attr, value, templar):
value = templar.template(value)
valid_values = frozenset(('always', 'on_failed', 'on_unreachable', 'on_skipped', 'never'))
if value and isinstance(value, string_types) and value not in valid_values:
raise AnsibleParserError("'%s' is not a valid value for debugger. Must be one of %s" % (value, ', '.join(valid_values)), obj=self.get_ds())
return value
def _validate_attributes(self, ds):
'''
Ensures that there are no keys in the datastructure which do
not map to attributes for this object.
'''
valid_attrs = frozenset(self.fattributes)
for key in ds:
if key not in valid_attrs:
raise AnsibleParserError("'%s' is not a valid attribute for a %s" % (key, self.__class__.__name__), obj=ds)
def validate(self, all_vars=None):
''' validation that is done at parse time, not load time '''
all_vars = {} if all_vars is None else all_vars
if not self._validated:
# walk all fields in the object
for (name, attribute) in self.fattributes.items():
# run validator only if present
method = getattr(self, '_validate_%s' % name, None)
if method:
method(attribute, name, getattr(self, name))
else:
# and make sure the attribute is of the type it should be
value = getattr(self, name)
if value is not None:
if attribute.isa == 'string' and isinstance(value, (list, dict)):
raise AnsibleParserError(
"The field '%s' is supposed to be a string type,"
" however the incoming data structure is a %s" % (name, type(value)), obj=self.get_ds()
)
self._validated = True
def _load_module_defaults(self, name, value):
if value is None:
return
if not isinstance(value, list):
value = [value]
validated_module_defaults = []
for defaults_dict in value:
if not isinstance(defaults_dict, dict):
raise AnsibleParserError(
"The field 'module_defaults' is supposed to be a dictionary or list of dictionaries, "
"the keys of which must be static action, module, or group names. Only the values may contain "
"templates. For example: {'ping': \"{{ ping_defaults }}\"}"
)
validated_defaults_dict = {}
for defaults_entry, defaults in defaults_dict.items():
# module_defaults do not use the 'collections' keyword, so actions and
# action_groups that are not fully qualified are part of the 'ansible.legacy'
# collection. Update those entries here, so module_defaults contains
# fully qualified entries.
if defaults_entry.startswith('group/'):
group_name = defaults_entry.split('group/')[-1]
# The resolved action_groups cache is saved on the current Play
if self.play is not None:
group_name, dummy = self._resolve_group(group_name)
defaults_entry = 'group/' + group_name
validated_defaults_dict[defaults_entry] = defaults
else:
if len(defaults_entry.split('.')) < 3:
defaults_entry = 'ansible.legacy.' + defaults_entry
resolved_action = self._resolve_action(defaults_entry)
if resolved_action:
validated_defaults_dict[resolved_action] = defaults
# If the defaults_entry is an ansible.legacy plugin, these defaults
# are inheritable by the 'ansible.builtin' subset, but are not
# required to exist.
if defaults_entry.startswith('ansible.legacy.'):
resolved_action = self._resolve_action(
defaults_entry.replace('ansible.legacy.', 'ansible.builtin.'),
mandatory=False
)
if resolved_action:
validated_defaults_dict[resolved_action] = defaults
validated_module_defaults.append(validated_defaults_dict)
return validated_module_defaults
@property
def play(self):
if hasattr(self, '_play'):
play = self._play
elif hasattr(self, '_parent') and hasattr(self._parent, '_play'):
play = self._parent._play
else:
play = self
if play.__class__.__name__ != 'Play':
# Should never happen, but handle gracefully by returning None, just in case
return None
return play
def _resolve_group(self, fq_group_name, mandatory=True):
if not AnsibleCollectionRef.is_valid_fqcr(fq_group_name):
collection_name = 'ansible.builtin'
fq_group_name = collection_name + '.' + fq_group_name
else:
collection_name = '.'.join(fq_group_name.split('.')[0:2])
# Check if the group has already been resolved and cached
if fq_group_name in self.play._group_actions:
return fq_group_name, self.play._group_actions[fq_group_name]
try:
action_groups = _get_collection_metadata(collection_name).get('action_groups', {})
except ValueError:
if not mandatory:
display.vvvvv("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
return fq_group_name, []
raise AnsibleParserError("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
# The collection may or may not use the fully qualified name
# Don't fail if the group doesn't exist in the collection
resource_name = fq_group_name.split(collection_name + '.')[-1]
action_group = action_groups.get(
fq_group_name,
action_groups.get(resource_name)
)
if action_group is None:
if not mandatory:
display.vvvvv("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
return fq_group_name, []
raise AnsibleParserError("Error loading module_defaults: could not resolve the module_defaults group %s" % fq_group_name)
resolved_actions = []
include_groups = []
found_group_metadata = False
for action in action_group:
# Everything should be a string except the metadata entry
if not isinstance(action, string_types):
_validate_action_group_metadata(action, found_group_metadata, fq_group_name)
if isinstance(action['metadata'], dict):
found_group_metadata = True
include_groups = action['metadata'].get('extend_group', [])
if isinstance(include_groups, string_types):
include_groups = [include_groups]
if not isinstance(include_groups, list):
# Bad entries may be a warning above, but prevent tracebacks by setting it back to the acceptable type.
include_groups = []
continue
# The collection may or may not use the fully qualified name.
# If not, it's part of the current collection.
if not AnsibleCollectionRef.is_valid_fqcr(action):
action = collection_name + '.' + action
resolved_action = self._resolve_action(action, mandatory=False)
if resolved_action:
resolved_actions.append(resolved_action)
for action in resolved_actions:
if action not in self.play._action_groups:
self.play._action_groups[action] = []
self.play._action_groups[action].append(fq_group_name)
self.play._group_actions[fq_group_name] = resolved_actions
# Resolve extended groups last, after caching the group in case they recursively refer to each other
for include_group in include_groups:
if not AnsibleCollectionRef.is_valid_fqcr(include_group):
include_group = collection_name + '.' + include_group
dummy, group_actions = self._resolve_group(include_group, mandatory=False)
for action in group_actions:
if action not in self.play._action_groups:
self.play._action_groups[action] = []
self.play._action_groups[action].append(fq_group_name)
self.play._group_actions[fq_group_name].extend(group_actions)
resolved_actions.extend(group_actions)
return fq_group_name, resolved_actions
def _resolve_action(self, action_name, mandatory=True):
context = module_loader.find_plugin_with_context(action_name)
if context.resolved and not context.action_plugin:
prefer = action_loader.find_plugin_with_context(action_name)
if prefer.resolved:
context = prefer
elif not context.resolved:
context = action_loader.find_plugin_with_context(action_name)
if context.resolved:
return context.resolved_fqcn
if mandatory:
raise AnsibleParserError("Could not resolve action %s in module_defaults" % action_name)
display.vvvvv("Could not resolve action %s in module_defaults" % action_name)
def squash(self):
'''
Evaluates all attributes and sets them to the evaluated version,
so that all future accesses of attributes do not need to evaluate
parent attributes.
'''
if not self._squashed:
for name in self.fattributes:
setattr(self, name, getattr(self, name))
self._squashed = True
def copy(self):
'''
Create a copy of this object and return it.
'''
try:
new_me = self.__class__()
except RuntimeError as e:
raise AnsibleError("Exceeded maximum object depth. This may have been caused by excessive role recursion", orig_exc=e)
for name in self.fattributes:
setattr(new_me, name, shallowcopy(getattr(self, f'_{name}', Sentinel)))
new_me._loader = self._loader
new_me._variable_manager = self._variable_manager
new_me._validated = self._validated
new_me._finalized = self._finalized
new_me._uuid = self._uuid
# if the ds value was set on the object, copy it to the new copy too
if hasattr(self, '_ds'):
new_me._ds = self._ds
return new_me
def get_validated_value(self, name, attribute, value, templar):
if attribute.isa == 'string':
value = to_text(value)
elif attribute.isa == 'int':
value = int(value)
elif attribute.isa == 'float':
value = float(value)
elif attribute.isa == 'bool':
value = boolean(value, strict=True)
elif attribute.isa == 'percent':
# special value, which may be an integer or float
# with an optional '%' at the end
if isinstance(value, string_types) and '%' in value:
value = value.replace('%', '')
value = float(value)
elif attribute.isa == 'list':
if value is None:
value = []
elif not isinstance(value, list):
value = [value]
if attribute.listof is not None:
for item in value:
if not isinstance(item, attribute.listof):
raise AnsibleParserError("the field '%s' should be a list of %s, "
"but the item '%s' is a %s" % (name, attribute.listof, item, type(item)), obj=self.get_ds())
elif attribute.required and attribute.listof == string_types:
if item is None or item.strip() == "":
raise AnsibleParserError("the field '%s' is required, and cannot have empty values" % (name,), obj=self.get_ds())
elif attribute.isa == 'set':
if value is None:
value = set()
elif not isinstance(value, (list, set)):
if isinstance(value, string_types):
value = value.split(',')
else:
# Making a list like this handles strings of
# text and bytes properly
value = [value]
if not isinstance(value, set):
value = set(value)
elif attribute.isa == 'dict':
if value is None:
value = dict()
elif not isinstance(value, dict):
raise TypeError("%s is not a dictionary" % value)
elif attribute.isa == 'class':
if not isinstance(value, attribute.class_type):
raise TypeError("%s is not a valid %s (got a %s instead)" % (name, attribute.class_type, type(value)))
value.post_validate(templar=templar)
return value
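# Example (comment added for exposition; not in the original file):
# with isa='percent', a value of '30%' becomes 30.0; with isa='set',
# the string 'a,b,b' becomes {'a', 'b'} via the comma-split branch above.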
def set_to_context(self, name):
''' set to parent inherited value or Sentinel as appropriate'''
attribute = self.fattributes[name]
if isinstance(attribute, NonInheritableFieldAttribute):
# setting to sentinel will trigger 'default/default()' on getter
setattr(self, name, Sentinel)
else:
try:
setattr(self, name, self._get_parent_attribute(name, omit=True))
except AttributeError:
# mostly playcontext as only tasks/handlers/blocks really resolve parent
setattr(self, name, Sentinel)
def post_validate(self, templar):
'''
we can't tell that everything is of the right type until we have
all the variables. Run basic type checks (from isa) as well as
any _post_validate_<foo> functions.
'''
# save the omit value for later checking
omit_value = templar.available_variables.get('omit')
for (name, attribute) in self.fattributes.items():
if attribute.static:
value = getattr(self, name)
# we don't template 'vars' but allow template as values for later use
if name not in ('vars',) and templar.is_template(value):
display.warning('"%s" is not templatable, but we found: %s, '
'it will not be templated and will be used "as is".' % (name, value))
continue
if getattr(self, name) is None:
if not attribute.required:
continue
else:
raise AnsibleParserError("the field '%s' is required but was not set" % name)
elif not attribute.always_post_validate and self.__class__.__name__ not in ('Task', 'Handler', 'PlayContext'):
# Intermediate objects like Play() won't have their fields validated by
# default, as their values are often inherited by other objects and validated
# later, so we don't want them to fail out early
continue
try:
# Run the post-validator if present. These methods are responsible for
# using the given templar to template the values, if required.
method = getattr(self, '_post_validate_%s' % name, None)
if method:
value = method(attribute, getattr(self, name), templar)
elif attribute.isa == 'class':
value = getattr(self, name)
else:
# if the attribute contains a variable, template it now
value = templar.template(getattr(self, name))
# If this evaluated to the omit value, set the value back to inherited by context
# or default specified in the FieldAttribute and move on
if omit_value is not None and value == omit_value:
self.set_to_context(name)
continue
# and make sure the attribute is of the type it should be
if value is not None:
value = self.get_validated_value(name, attribute, value, templar)
# and assign the massaged value back to the attribute field
setattr(self, name, value)
except (TypeError, ValueError) as e:
value = getattr(self, name)
raise AnsibleParserError("the field '%s' has an invalid value (%s), and could not be converted to an %s."
"The error was: %s" % (name, value, attribute.isa, e), obj=self.get_ds(), orig_exc=e)
except (AnsibleUndefinedVariable, UndefinedError) as e:
if templar._fail_on_undefined_errors and name != 'name':
if name == 'args':
msg = "The task includes an option with an undefined variable. The error was: %s" % (to_native(e))
else:
msg = "The field '%s' has an invalid value, which includes an undefined variable. The error was: %s" % (name, to_native(e))
raise AnsibleParserError(msg, obj=self.get_ds(), orig_exc=e)
self._finalized = True
def _load_vars(self, attr, ds):
'''
Vars in a play can be specified either as a dictionary directly, or
as a list of dictionaries. If the latter, this method will turn the
list into a single dictionary.
'''
def _validate_variable_keys(ds):
for key in ds:
if not isidentifier(key):
raise TypeError("'%s' is not a valid variable name" % key)
try:
if isinstance(ds, dict):
_validate_variable_keys(ds)
return combine_vars(self.vars, ds)
elif isinstance(ds, list):
display.deprecated(
(
'Specifying a list of dictionaries for vars is deprecated in favor of '
'specifying a dictionary.'
),
version='2.18'
)
all_vars = self.vars
for item in ds:
if not isinstance(item, dict):
raise ValueError
_validate_variable_keys(item)
all_vars = combine_vars(all_vars, item)
return all_vars
elif ds is None:
return {}
else:
raise ValueError
except ValueError as e:
raise AnsibleParserError("Vars in a %s must be specified as a dictionary" % self.__class__.__name__,
obj=ds, orig_exc=e)
except TypeError as e:
raise AnsibleParserError("Invalid variable name in vars specified for %s: %s" % (self.__class__.__name__, e), obj=ds, orig_exc=e)
def _extend_value(self, value, new_value, prepend=False):
'''
Will extend the value given with new_value (and will turn both
into lists if they are not so already). The values are run through
a set to remove duplicate values.
'''
if not isinstance(value, list):
value = [value]
if not isinstance(new_value, list):
new_value = [new_value]
# Due to where _extend_value may run for some attributes
# it is possible to end up with Sentinel in the list of values
# ensure we strip them
value = [v for v in value if v is not Sentinel]
new_value = [v for v in new_value if v is not Sentinel]
if prepend:
combined = new_value + value
else:
combined = value + new_value
return [i for i, _ in itertools.groupby(combined) if i is not None]
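# Example (comment added for exposition; not in the original file):
# _extend_value(['a', 'b'], ['b', 'c']) returns ['a', 'b', 'c'] because
# itertools.groupby() collapses *adjacent* duplicates only; with
# prepend=True the combined order is new_value + value, so which
# duplicates end up adjacent depends on the direction of the merge.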
def dump_attrs(self):
'''
Dumps all attributes to a dictionary
'''
attrs = {}
for (name, attribute) in self.fattributes.items():
attr = getattr(self, name)
if attribute.isa == 'class' and hasattr(attr, 'serialize'):
attrs[name] = attr.serialize()
else:
attrs[name] = attr
return attrs
def from_attrs(self, attrs):
'''
Loads attributes from a dictionary
'''
for (attr, value) in attrs.items():
if attr in self.fattributes:
attribute = self.fattributes[attr]
if attribute.isa == 'class' and isinstance(value, dict):
obj = attribute.class_type()
obj.deserialize(value)
setattr(self, attr, obj)
else:
setattr(self, attr, value)
# from_attrs is only used to create a finalized task
# from attrs from the Worker/TaskExecutor
# Those attrs are finalized and squashed in the TE
# and controller side use needs to reflect that
self._finalized = True
self._squashed = True
def serialize(self):
'''
Serializes the object derived from the base object into
a dictionary of values. This only serializes the field
attributes for the object, so this may need to be overridden
for any classes which wish to add additional items not stored
as field attributes.
'''
repr = self.dump_attrs()
# serialize the uuid field
repr['uuid'] = self._uuid
repr['finalized'] = self._finalized
repr['squashed'] = self._squashed
return repr
def deserialize(self, data):
'''
Given a dictionary of values, load up the field attributes for
this object. As with serialize(), if there are any non-field
attribute data members, this method will need to be overridden
and extended.
'''
if not isinstance(data, dict):
raise AnsibleAssertionError('data (%s) should be a dict but is a %s' % (data, type(data)))
for (name, attribute) in self.fattributes.items():
if name in data:
setattr(self, name, data[name])
else:
self.set_to_context(name)
# restore the UUID field
setattr(self, '_uuid', data.get('uuid'))
self._finalized = data.get('finalized', False)
self._squashed = data.get('squashed', False)
class Base(FieldAttributeBase):
name = NonInheritableFieldAttribute(isa='string', default='', always_post_validate=True)
# connection/transport
connection = ConnectionFieldAttribute(isa='string', default=context.cliargs_deferred_get('connection'))
port = FieldAttribute(isa='int')
remote_user = FieldAttribute(isa='string', default=context.cliargs_deferred_get('remote_user'))
# variables
vars = NonInheritableFieldAttribute(isa='dict', priority=100, static=True)
# module default params
module_defaults = FieldAttribute(isa='list', extend=True, prepend=True)
# flags and misc. settings
environment = FieldAttribute(isa='list', extend=True, prepend=True)
no_log = FieldAttribute(isa='bool')
run_once = FieldAttribute(isa='bool')
ignore_errors = FieldAttribute(isa='bool')
ignore_unreachable = FieldAttribute(isa='bool')
check_mode = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('check'))
diff = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('diff'))
any_errors_fatal = FieldAttribute(isa='bool', default=C.ANY_ERRORS_FATAL)
throttle = FieldAttribute(isa='int', default=0)
timeout = FieldAttribute(isa='int', default=C.TASK_TIMEOUT)
# explicitly invoke a debugger on tasks
debugger = FieldAttribute(isa='string')
# Privilege escalation
become = FieldAttribute(isa='bool', default=context.cliargs_deferred_get('become'))
become_method = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_method'))
become_user = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_user'))
become_flags = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_flags'))
become_exe = FieldAttribute(isa='string', default=context.cliargs_deferred_get('become_exe'))
# used to hold sudo/su stuff
DEPRECATED_ATTRIBUTES = [] # type: list[str]
def get_path(self):
''' return the absolute path of the playbook object and its line number '''
path = ""
try:
path = "%s:%s" % (self._ds._data_source, self._ds._line_number)
except AttributeError:
try:
path = "%s:%s" % (self._parent._play._ds._data_source, self._parent._play._ds._line_number)
except AttributeError:
pass
return path
def get_dep_chain(self):
if hasattr(self, '_parent') and self._parent:
return self._parent.get_dep_chain()
else:
return None
def get_search_path(self):
'''
Return the list of paths you should search for files, in order.
This follows role/playbook dependency chain.
'''
path_stack = []
dep_chain = self.get_dep_chain()
# inside role: add the dependency chain from current to dependent
if dep_chain:
path_stack.extend(reversed([x._role_path for x in dep_chain if hasattr(x, '_role_path')]))
# add path of task itself, unless it is already in the list
task_dir = os.path.dirname(self.get_path())
if task_dir not in path_stack:
path_stack.append(task_dir)
return path_stack
|
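The getter fallback that `base.py` relies on is easiest to see in isolation. Below is a minimal descriptor sketch (assumed, simplified names — the real descriptor lives in `ansible/playbook/attribute.py`, which is not shown here) of how an unset field defers to the parent chain until a concrete value is assigned:

```python
# Simplified stand-in for FieldAttribute's Sentinel-based fallback.
class Sentinel:
    pass

class SimpleFieldAttribute:
    def __init__(self, default=None):
        self.default = default

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        value = getattr(obj, '_%s' % self.name, Sentinel)
        if value is not Sentinel:
            return value
        parent = getattr(obj, '_parent', None)
        if parent is not None:
            return getattr(parent, self.name)  # walk up, like _get_parent_attribute()
        return self.default

    def __set__(self, obj, value):
        setattr(obj, '_%s' % self.name, value)

class Keywords:
    become = SimpleFieldAttribute(default=False)

    def __init__(self, parent=None):
        self._parent = parent

play = Keywords()
task = Keywords(parent=play)
play.become = True
print(task.become)  # True: resolved through the parent chain
```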
lib/ansible/playbook/block.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ansible.constants as C
from ansible.errors import AnsibleParserError
from ansible.playbook.attribute import FieldAttribute, NonInheritableFieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.conditional import Conditional
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.helpers import load_list_of_tasks
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.sentinel import Sentinel
class Block(Base, Conditional, CollectionSearch, Taggable):
# main block fields containing the task lists
block = NonInheritableFieldAttribute(isa='list', default=list)
rescue = NonInheritableFieldAttribute(isa='list', default=list)
always = NonInheritableFieldAttribute(isa='list', default=list)
# other fields for task compat
notify = FieldAttribute(isa='list')
delegate_to = FieldAttribute(isa='string')
delegate_facts = FieldAttribute(isa='bool')
# for future consideration? this would be functionally
# similar to the 'else' clause for exceptions
# otherwise = FieldAttribute(isa='list')
def __init__(self, play=None, parent_block=None, role=None, task_include=None, use_handlers=False, implicit=False):
self._play = play
self._role = role
self._parent = None
self._dep_chain = None
self._use_handlers = use_handlers
self._implicit = implicit
if task_include:
self._parent = task_include
elif parent_block:
self._parent = parent_block
super(Block, self).__init__()
def __repr__(self):
return "BLOCK(uuid=%s)(id=%s)(parent=%s)" % (self._uuid, id(self), self._parent)
def __eq__(self, other):
'''object comparison based on _uuid'''
return self._uuid == other._uuid
def __ne__(self, other):
'''object comparison based on _uuid'''
return self._uuid != other._uuid
def get_vars(self):
'''
Blocks do not store variables directly, however they may be a member
of a role or task include which does, so return those if present.
'''
all_vars = {}
if self._parent:
all_vars |= self._parent.get_vars()
all_vars |= self.vars.copy()
return all_vars
@staticmethod
def load(data, play=None, parent_block=None, role=None, task_include=None, use_handlers=False, variable_manager=None, loader=None):
implicit = not Block.is_block(data)
b = Block(play=play, parent_block=parent_block, role=role, task_include=task_include, use_handlers=use_handlers, implicit=implicit)
return b.load_data(data, variable_manager=variable_manager, loader=loader)
@staticmethod
def is_block(ds):
is_block = False
if isinstance(ds, dict):
for attr in ('block', 'rescue', 'always'):
if attr in ds:
is_block = True
break
return is_block
def preprocess_data(self, ds):
'''
If a simple task is given, an implicit block for that single task
is created, which goes in the main portion of the block
'''
if not Block.is_block(ds):
if isinstance(ds, list):
return super(Block, self).preprocess_data(dict(block=ds))
else:
return super(Block, self).preprocess_data(dict(block=[ds]))
return super(Block, self).preprocess_data(ds)
def _load_block(self, attr, ds):
try:
return load_list_of_tasks(
ds,
play=self._play,
block=self,
role=self._role,
task_include=None,
variable_manager=self._variable_manager,
loader=self._loader,
use_handlers=self._use_handlers,
)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading a block", obj=self._ds, orig_exc=e)
def _load_rescue(self, attr, ds):
try:
return load_list_of_tasks(
ds,
play=self._play,
block=self,
role=self._role,
task_include=None,
variable_manager=self._variable_manager,
loader=self._loader,
use_handlers=self._use_handlers,
)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading rescue.", obj=self._ds, orig_exc=e)
def _load_always(self, attr, ds):
try:
return load_list_of_tasks(
ds,
play=self._play,
block=self,
role=self._role,
task_include=None,
variable_manager=self._variable_manager,
loader=self._loader,
use_handlers=self._use_handlers,
)
except AssertionError as e:
raise AnsibleParserError("A malformed block was encountered while loading always", obj=self._ds, orig_exc=e)
def _validate_always(self, attr, name, value):
if value and not self.block:
raise AnsibleParserError("'%s' keyword cannot be used without 'block'" % name, obj=self._ds)
_validate_rescue = _validate_always
def get_dep_chain(self):
if self._dep_chain is None:
if self._parent:
return self._parent.get_dep_chain()
else:
return None
else:
return self._dep_chain[:]
def copy(self, exclude_parent=False, exclude_tasks=False):
def _dupe_task_list(task_list, new_block):
new_task_list = []
for task in task_list:
new_task = task.copy(exclude_parent=True)
if task._parent:
new_task._parent = task._parent.copy(exclude_tasks=True)
if task._parent == new_block:
# If task._parent is the same as new_block, just replace it
new_task._parent = new_block
else:
# task may not be a direct child of new_block, search for the correct place to insert new_block
cur_obj = new_task._parent
while cur_obj._parent and cur_obj._parent != new_block:
cur_obj = cur_obj._parent
cur_obj._parent = new_block
else:
new_task._parent = new_block
new_task_list.append(new_task)
return new_task_list
new_me = super(Block, self).copy()
new_me._play = self._play
new_me._use_handlers = self._use_handlers
if self._dep_chain is not None:
new_me._dep_chain = self._dep_chain[:]
new_me._parent = None
if self._parent and not exclude_parent:
new_me._parent = self._parent.copy(exclude_tasks=True)
if not exclude_tasks:
new_me.block = _dupe_task_list(self.block or [], new_me)
new_me.rescue = _dupe_task_list(self.rescue or [], new_me)
new_me.always = _dupe_task_list(self.always or [], new_me)
new_me._role = None
if self._role:
new_me._role = self._role
new_me.validate()
return new_me
def serialize(self):
'''
Override of the default serialize method, since when we're serializing
a task we don't want to include the attribute list of tasks.
'''
data = dict()
for attr in self.fattributes:
if attr not in ('block', 'rescue', 'always'):
data[attr] = getattr(self, attr)
data['dep_chain'] = self.get_dep_chain()
if self._role is not None:
data['role'] = self._role.serialize()
if self._parent is not None:
data['parent'] = self._parent.copy(exclude_tasks=True).serialize()
data['parent_type'] = self._parent.__class__.__name__
return data
def deserialize(self, data):
'''
Override of the default deserialize method, to match the above overridden
serialize method
'''
# import is here to avoid import loops
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.handler_task_include import HandlerTaskInclude
# we don't want the full set of attributes (the task lists), as that
# would lead to a serialize/deserialize loop
for attr in self.fattributes:
if attr in data and attr not in ('block', 'rescue', 'always'):
setattr(self, attr, data.get(attr))
self._dep_chain = data.get('dep_chain', None)
# if there was a serialized role, unpack it too
role_data = data.get('role')
if role_data:
r = Role()
r.deserialize(role_data)
self._role = r
parent_data = data.get('parent')
if parent_data:
parent_type = data.get('parent_type')
if parent_type == 'Block':
p = Block()
elif parent_type == 'TaskInclude':
p = TaskInclude()
elif parent_type == 'HandlerTaskInclude':
p = HandlerTaskInclude()
p.deserialize(parent_data)
self._parent = p
self._dep_chain = self._parent.get_dep_chain()
def set_loader(self, loader):
self._loader = loader
if self._parent:
self._parent.set_loader(loader)
elif self._role:
self._role.set_loader(loader)
dep_chain = self.get_dep_chain()
if dep_chain:
for dep in dep_chain:
dep.set_loader(loader)
def _get_parent_attribute(self, attr, omit=False):
'''
Generic logic to get the attribute or parent attribute for a block value.
'''
extend = self.fattributes.get(attr).extend
prepend = self.fattributes.get(attr).prepend
try:
# omit self, and only get parent values
if omit:
value = Sentinel
else:
value = getattr(self, f'_{attr}', Sentinel)
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
if getattr(self._parent, 'statically_loaded', True):
_parent = self._parent
else:
_parent = self._parent._parent
if _parent and (value is Sentinel or extend):
try:
if getattr(_parent, 'statically_loaded', True):
if hasattr(_parent, '_get_parent_attribute'):
parent_value = _parent._get_parent_attribute(attr)
else:
parent_value = getattr(_parent, f'_{attr}', Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
except AttributeError:
pass
if self._role and (value is Sentinel or extend):
try:
parent_value = getattr(self._role, f'_{attr}', Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
dep_chain = self.get_dep_chain()
if dep_chain and (value is Sentinel or extend):
dep_chain.reverse()
for dep in dep_chain:
dep_value = getattr(dep, f'_{attr}', Sentinel)
if extend:
value = self._extend_value(value, dep_value, prepend)
else:
value = dep_value
if value is not Sentinel and not extend:
break
except AttributeError:
pass
if self._play and (value is Sentinel or extend):
try:
play_value = getattr(self._play, f'_{attr}', Sentinel)
if play_value is not Sentinel:
if extend:
value = self._extend_value(value, play_value, prepend)
else:
value = play_value
except AttributeError:
pass
except KeyError:
pass
return value
def filter_tagged_tasks(self, all_vars):
'''
Creates a new block, with task lists filtered based on the tags.
'''
def evaluate_and_append_task(target):
tmp_list = []
for task in target:
if isinstance(task, Block):
filtered_block = evaluate_block(task)
if filtered_block.has_tasks():
tmp_list.append(filtered_block)
elif ((task.action in C._ACTION_META and task.implicit) or
(task.action in C._ACTION_INCLUDE and task.evaluate_tags([], self._play.skip_tags, all_vars=all_vars)) or
task.evaluate_tags(self._play.only_tags, self._play.skip_tags, all_vars=all_vars)):
tmp_list.append(task)
return tmp_list
def evaluate_block(block):
new_block = block.copy(exclude_parent=True, exclude_tasks=True)
new_block._parent = block._parent
new_block.block = evaluate_and_append_task(block.block)
new_block.rescue = evaluate_and_append_task(block.rescue)
new_block.always = evaluate_and_append_task(block.always)
return new_block
return evaluate_block(self)
def get_tasks(self):
def evaluate_and_append_task(target):
tmp_list = []
for task in target:
if isinstance(task, Block):
tmp_list.extend(evaluate_block(task))
else:
tmp_list.append(task)
return tmp_list
def evaluate_block(block):
rv = evaluate_and_append_task(block.block)
rv.extend(evaluate_and_append_task(block.rescue))
rv.extend(evaluate_and_append_task(block.always))
return rv
return evaluate_block(self)
def has_tasks(self):
return len(self.block) > 0 or len(self.rescue) > 0 or len(self.always) > 0
def get_include_params(self):
if self._parent:
return self._parent.get_include_params()
else:
return dict()
def all_parents_static(self):
'''
Determine if all of the parents of this block were statically loaded
or not. Since Task/TaskInclude objects may be in the chain, they simply
call their parents all_parents_static() method. Only Block objects in
the chain check the statically_loaded value of the parent.
'''
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude) and not self._parent.statically_loaded:
return False
return self._parent.all_parents_static()
return True
def get_first_parent_include(self):
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude):
return self._parent
return self._parent.get_first_parent_include()
return None
|
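One detail in `Block.preprocess_data()` above worth isolating: a bare task, or a bare list of tasks, is wrapped into an implicit block so the rest of the inheritance machinery only ever deals with block-shaped data. A standalone sketch of that normalization (illustrative, with plain dicts standing in for parsed YAML):

```python
# Mirrors Block.is_block() + Block.preprocess_data() wrapping (sketch only).
def wrap_as_block(ds):
    is_block = isinstance(ds, dict) and any(k in ds for k in ('block', 'rescue', 'always'))
    if is_block:
        return ds                      # already block-shaped: pass through
    if isinstance(ds, list):
        return {'block': ds}           # bare task list -> main block section
    return {'block': [ds]}             # single bare task -> one-task block

print(wrap_as_block({'ansible.builtin.command': 'whoami'}))
# {'block': [{'ansible.builtin.command': 'whoami'}]}
```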
lib/ansible/playbook/task.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
from ansible.plugins.loader import lookup_loader
from ansible.playbook.attribute import FieldAttribute, NonInheritableFieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.conditional import Conditional
from ansible.playbook.loop_control import LoopControl
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
__all__ = ['Task']
display = Display()
class Task(Base, Conditional, Taggable, CollectionSearch):
"""
A task is a language feature that represents a call to a module, with given arguments and other parameters.
A handler is a subclass of a task.
Usage:
Task.load(datastructure) -> Task
Task.something(...)
"""
# =================================================================================
# ATTRIBUTES
# load_<attribute_name> and
# validate_<attribute_name>
# will be used if defined
# might be possible to define others
# NOTE: ONLY set defaults on task attributes that are not inheritable,
# inheritance is only triggered if the 'current value' is Sentinel,
# default can be set at play/top level object and inheritance will take its course.
args = FieldAttribute(isa='dict', default=dict)
action = FieldAttribute(isa='string')
async_val = FieldAttribute(isa='int', default=0, alias='async')
changed_when = FieldAttribute(isa='list', default=list)
delay = FieldAttribute(isa='int', default=5)
delegate_to = FieldAttribute(isa='string')
delegate_facts = FieldAttribute(isa='bool')
failed_when = FieldAttribute(isa='list', default=list)
loop = FieldAttribute()
loop_control = NonInheritableFieldAttribute(isa='class', class_type=LoopControl, default=LoopControl)
notify = FieldAttribute(isa='list')
poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL)
register = FieldAttribute(isa='string', static=True)
retries = FieldAttribute(isa='int', default=3)
until = FieldAttribute(isa='list', default=list)
# deprecated, used to be loop and loop_args but loop has been repurposed
loop_with = NonInheritableFieldAttribute(isa='string', private=True)
def __init__(self, block=None, role=None, task_include=None):
''' constructs a task; without the Task.load classmethod, it will be pretty blank '''
self._role = role
self._parent = None
self.implicit = False
self.resolved_action = None
if task_include:
self._parent = task_include
else:
self._parent = block
super(Task, self).__init__()
def get_name(self, include_role_fqcn=True):
''' return the name of the task '''
if self._role:
role_name = self._role.get_name(include_role_fqcn=include_role_fqcn)
if self._role and self.name:
return "%s : %s" % (role_name, self.name)
elif self.name:
return self.name
else:
if self._role:
return "%s : %s" % (role_name, self.action)
else:
return "%s" % (self.action,)
def _merge_kv(self, ds):
if ds is None:
return ""
elif isinstance(ds, string_types):
return ds
elif isinstance(ds, dict):
buf = ""
for (k, v) in ds.items():
if k.startswith('_'):
continue
buf = buf + "%s=%s " % (k, v)
buf = buf.strip()
return buf
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
t = Task(block=block, role=role, task_include=task_include)
return t.load_data(data, variable_manager=variable_manager, loader=loader)
def __repr__(self):
''' returns a human readable representation of the task '''
if self.get_name() in C._ACTION_META:
return "TASK: meta (%s)" % self.args['_raw_params']
else:
return "TASK: %s" % self.get_name()
def _preprocess_with_loop(self, ds, new_ds, k, v):
''' take a lookup plugin name and store it correctly '''
loop_name = k.removeprefix("with_")
if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None:
raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds)
if v is None:
raise AnsibleError("you must specify a value when using %s" % k, obj=ds)
new_ds['loop_with'] = loop_name
new_ds['loop'] = v
# display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead",
# version="2.10", collection_name='ansible.builtin')
def preprocess_data(self, ds):
'''
        tasks are especially complex arguments, so they need pre-processing.
        keep it short.
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds)))
# the new, cleaned datastructure, which will have legacy
# items reduced to a standard structure suitable for the
# attributes of the task class
new_ds = AnsibleMapping()
if isinstance(ds, AnsibleBaseYAMLObject):
new_ds.ansible_pos = ds.ansible_pos
        # since this affects the task action parsing, we have to resolve it in preprocess instead of in the typical validator
default_collection = AnsibleCollectionConfig.default_collection
collections_list = ds.get('collections')
if collections_list is None:
# use the parent value if our ds doesn't define it
collections_list = self.collections
else:
# Validate this untemplated field early on to guarantee we are dealing with a list.
# This is also done in CollectionSearch._load_collections() but this runs before that call.
collections_list = self.get_validated_value('collections', self.fattributes.get('collections'), collections_list, None)
if default_collection and not self._role: # FIXME: and not a collections role
if collections_list:
if default_collection not in collections_list:
collections_list.insert(0, default_collection)
else:
collections_list = [default_collection]
if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list:
collections_list.append('ansible.legacy')
if collections_list:
ds['collections'] = collections_list
# use the args parsing class to determine the action, args,
# and the delegate_to value from the various possible forms
# supported as legacy
args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list)
try:
(action, args, delegate_to) = args_parser.parse()
except AnsibleParserError as e:
            # if the raised exception was created with obj=ds args, then it includes the detail
            # so we don't need to add it and can simply re-raise.
if e.obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e)
else:
self.resolved_action = args_parser.resolved_action
# the command/shell/script modules used to support the `cmd` arg,
# which corresponds to what we now call _raw_params, so move that
# value over to _raw_params (assuming it is empty)
if action in C._ACTION_HAS_CMD:
if 'cmd' in args:
if args.get('_raw_params', '') != '':
raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified."
" Please put everything in one or the other place.", obj=ds)
args['_raw_params'] = args.pop('cmd')
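        # Illustrative: for a task like {'command': {'cmd': 'ls -la'}}, the
        # parser yields args={'cmd': 'ls -la'}, which the block above moves to
        # args={'_raw_params': 'ls -la'} so the module sees a single form.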
new_ds['action'] = action
new_ds['args'] = args
new_ds['delegate_to'] = delegate_to
# we handle any 'vars' specified in the ds here, as we may
# be adding things to them below (special handling for includes).
# When that deprecated feature is removed, this can be too.
if 'vars' in ds:
# _load_vars is defined in Base, and is used to load a dictionary
# or list of dictionaries in a standard way
new_ds['vars'] = self._load_vars(None, ds.get('vars'))
else:
new_ds['vars'] = dict()
for (k, v) in ds.items():
if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell':
# we don't want to re-assign these values, which were determined by the ModuleArgsParser() above
continue
elif k.startswith('with_') and k.removeprefix("with_") in lookup_loader:
# transform into loop property
self._preprocess_with_loop(ds, new_ds, k, v)
elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self._valid_attrs:
new_ds[k] = v
else:
display.warning("Ignoring invalid attribute: %s" % k)
return super(Task, self).preprocess_data(new_ds)
def _load_loop_control(self, attr, ds):
if not isinstance(ds, dict):
raise AnsibleParserError(
"the `loop_control` value must be specified as a dictionary and cannot "
"be a variable itself (though it can contain variables)",
obj=ds,
)
return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader)
def _validate_attributes(self, ds):
try:
super(Task, self)._validate_attributes(ds)
except AnsibleParserError as e:
e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration'
raise e
def _validate_changed_when(self, attr, name, value):
if not isinstance(value, list):
setattr(self, name, [value])
def _validate_failed_when(self, attr, name, value):
if not isinstance(value, list):
setattr(self, name, [value])
def post_validate(self, templar):
'''
Override of base class post_validate, to also do final validation on
the block and task include (if any) to which this task belongs.
'''
if self._parent:
self._parent.post_validate(templar)
if AnsibleCollectionConfig.default_collection:
pass
super(Task, self).post_validate(templar)
def _post_validate_loop(self, attr, value, templar):
'''
Override post validation for the loop field, which is templated
specially in the TaskExecutor class when evaluating loops.
'''
return value
def _post_validate_environment(self, attr, value, templar):
'''
Override post validation of vars on the play, as we don't want to
template these too early.
'''
env = {}
if value is not None:
def _parse_env_kv(k, v):
try:
env[k] = templar.template(v, convert_bare=False)
except AnsibleUndefinedVariable as e:
error = to_native(e)
                    if self.action in C._ACTION_FACT_GATHERING and ('ansible_facts.env' in error or 'ansible_env' in error):
# ignore as fact gathering is required for 'env' facts
return
raise
if isinstance(value, list):
for env_item in value:
if isinstance(env_item, dict):
for k in env_item:
_parse_env_kv(k, env_item[k])
else:
isdict = templar.template(env_item, convert_bare=False)
if isinstance(isdict, dict):
env |= isdict
else:
display.warning("could not parse environment value, skipping: %s" % value)
elif isinstance(value, dict):
# should not really happen
env = dict()
for env_item in value:
_parse_env_kv(env_item, value[env_item])
else:
# at this point it should be a simple string, also should not happen
env = templar.template(value, convert_bare=False)
return env
def _post_validate_changed_when(self, attr, value, templar):
'''
changed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_failed_when(self, attr, value, templar):
'''
failed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_until(self, attr, value, templar):
'''
until is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def get_vars(self):
all_vars = dict()
if self._parent:
all_vars |= self._parent.get_vars()
all_vars |= self.vars
if 'tags' in all_vars:
del all_vars['tags']
if 'when' in all_vars:
del all_vars['when']
return all_vars
def get_include_params(self):
all_vars = dict()
if self._parent:
all_vars |= self._parent.get_include_params()
if self.action in C._ACTION_ALL_INCLUDES:
all_vars |= self.vars
return all_vars
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(Task, self).copy()
new_me._parent = None
if self._parent and not exclude_parent:
new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks)
new_me._role = None
if self._role:
new_me._role = self._role
new_me.implicit = self.implicit
new_me.resolved_action = self.resolved_action
new_me._uuid = self._uuid
return new_me
def serialize(self):
data = super(Task, self).serialize()
if not self._squashed and not self._finalized:
if self._parent:
data['parent'] = self._parent.serialize()
data['parent_type'] = self._parent.__class__.__name__
if self._role:
data['role'] = self._role.serialize()
data['implicit'] = self.implicit
data['resolved_action'] = self.resolved_action
return data
def deserialize(self, data):
# import is here to avoid import loops
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.handler_task_include import HandlerTaskInclude
parent_data = data.get('parent', None)
if parent_data:
parent_type = data.get('parent_type')
if parent_type == 'Block':
p = Block()
elif parent_type == 'TaskInclude':
p = TaskInclude()
elif parent_type == 'HandlerTaskInclude':
p = HandlerTaskInclude()
p.deserialize(parent_data)
self._parent = p
del data['parent']
role_data = data.get('role')
if role_data:
r = Role()
r.deserialize(role_data)
self._role = r
del data['role']
self.implicit = data.get('implicit', False)
self.resolved_action = data.get('resolved_action')
super(Task, self).deserialize(data)
def set_loader(self, loader):
'''
        Sets the loader on this object and recursively on its parent objects.
This is used primarily after the Task has been serialized/deserialized, which
does not preserve the loader.
'''
self._loader = loader
if self._parent:
self._parent.set_loader(loader)
def _get_parent_attribute(self, attr, omit=False):
'''
Generic logic to get the attribute or parent attribute for a task value.
'''
extend = self.fattributes.get(attr).extend
prepend = self.fattributes.get(attr).prepend
try:
# omit self, and only get parent values
if omit:
value = Sentinel
else:
value = getattr(self, f'_{attr}', Sentinel)
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
if getattr(self._parent, 'statically_loaded', True):
_parent = self._parent
else:
_parent = self._parent._parent
if _parent and (value is Sentinel or extend):
if getattr(_parent, 'statically_loaded', True):
                    # vars are always inheritable; other attributes might not be inheritable for the parent but still should be for other ancestors
if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'):
parent_value = _parent._get_parent_attribute(attr)
else:
parent_value = getattr(_parent, f'_{attr}', Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
except KeyError:
pass
return value
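    # Sketch of the lookup order (assuming a task nested in a block in a play):
    # when the task's own value is Sentinel, the block is consulted, then the
    # play; attributes flagged extend=True merge values along the chain (with
    # prepend controlling order) instead of taking the nearest ancestor's value.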
def all_parents_static(self):
if self._parent:
return self._parent.all_parents_static()
return True
def get_first_parent_include(self):
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude):
return self._parent
return self._parent.get_first_parent_include()
return None
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
test/integration/targets/keyword_inheritance/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
test/integration/targets/keyword_inheritance/roles/whoami/tasks/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
test/integration/targets/keyword_inheritance/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
test/integration/targets/keyword_inheritance/test.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,021 |
become not applied on roles
|
### Summary
Become is not applied on roles when playbook has `become: true` set.
### Issue Type
Bug Report
### Component Name
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/twouters/ansible-2.14/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /home/twouters//ansible-2.14/bin/ansible
python version = 3.10.7 (main, Sep 6 2022, 21:22:27) [GCC 12.2.0] (/home/twouters/ansible-2.14/bin/python)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Arch Linux and Debian but probably irrelevant
### Steps to Reproduce
```
$ tree -Ap
[drwx------] .
├── [drwxr-xr-x] testrole
│ └── [drwxr-xr-x] tasks
│ └── [-rw-r--r--] main.yml
└── [-rw-r--r--] test.yml
2 directories, 2 files
```
```
$ cat test.yml
---
- hosts: localhost
become: true
gather_facts: no
pre_tasks:
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
roles:
- testrole
```
```
$ cat testrole/tasks/main.yml
---
- ansible.builtin.command: whoami
register: whoami
- ansible.builtin.assert:
that: whoami.stdout == "root"
```
### Expected Results
All assertions pass and become is applied to all tasks
```
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=4 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
[WARNING]: No inventory was parsed, only implicit localhost is available
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost
does not match 'all'
PLAY [localhost] ***************************************************************************************
TASK [ansible.builtin.command] *************************************************************************
changed: [localhost]
TASK [ansible.builtin.assert] **************************************************************************
ok: [localhost] => {
"changed": false,
"msg": "All assertions passed"
}
TASK [testrole : ansible.builtin.command] **************************************************************
changed: [localhost]
TASK [testrole : ansible.builtin.assert] ***************************************************************
fatal: [localhost]: FAILED! => {
"assertion": "whoami.stdout == \"root\"",
"changed": false,
"evaluated_to": false,
"msg": "Assertion failed"
}
PLAY RECAP *********************************************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79021
|
https://github.com/ansible/ansible/pull/79049
|
11c1777d56664b1acb56b387a1ad6aeadef1391d
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
| 2022-10-04T14:40:56Z |
python
| 2022-10-06T13:55:56Z |
test/integration/targets/omit/75692.yml
|
- name: omit should reset to 'absent' or same context, not just 'default' value
hosts: testhost
gather_facts: false
become: yes
become_user: nobody
roles:
- name: setup_test_user
tasks:
- shell: whoami
register: inherited
- shell: whoami
register: explicit_no
become: false
- shell: whoami
register: omited_inheritance
become: '{{ omit }}'
- shell: whoami
register: explicit_yes
become: yes
- name: ensure omit works with inheritance
assert:
that:
- inherited.stdout == omited_inheritance.stdout
- inherited.stdout == explicit_yes.stdout
- inherited.stdout != explicit_no.stdout
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,442 |
ansible galaxy install collection from git replaces directory symlinks with empty dir
|
### Summary
When a path in a collection is a symlink to a directory, the symlink is replaced with an empty dir instead of either a copy of the symlink target or the original symlink.
collection source:
```
$ tree -Ap --noreport ~/my_namespace/testcollection/collection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
collection after installation (from git source):
```
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [drwxr-xr-x] 7.4
```
role `meta/requirements.yml` contains the following:
```
collections:
- name: my_namespace.testcollection
source: git+file:///home/twouters/my_namespace/testcollection#/collection/
version: 2022.31.0-coll5
type: git
```
contents of collection tar.gz after `ansible-galaxy collection build` seem to still be correct:
```
$ tar -tvf my_namespace-testcollection-2022.31.0-coll5.tar.gz | grep -F roles/php/templates
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/
lrw-r--r-- 0/0 0 1970-01-01 01:00 roles/php/templates/etc/php/7.4 -> 7.3
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/
-rw-r--r-- 0/0 430 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/www.conf.j2
```
installation from tar.gz with `ansible-galaxy collection install --force my_namespace-testcollection-2022.31.0-coll5.tar.gz` is fine:
```
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.5 (main, Aug 1 2022, 07:53:20) [GCC 12.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
$
```
### OS / Environment
Arch Linux
### Steps to Reproduce
See summary
### Expected Results
See summary. I expect to either have a copy of the symlink target or preferably the original symlink.
### Actual Results
```console
See summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78442
|
https://github.com/ansible/ansible/pull/78983
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
|
8cf7a0d3f0fc71bb148ceb6501851b39fe6a6f68
| 2022-08-04T12:56:59Z |
python
| 2022-10-06T17:53:23Z |
changelogs/fragments/78983-fix-collection-install-from-source-respects-dir-symlinks.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,442 |
ansible galaxy install collection from git replaces directory symlinks with empty dir
|
### Summary
When a path in a collection is a symlink to a directory, the symlink is replaced with an empty dir instead of either a copy of the symlink target or the original symlink.
collection source:
```
$ tree -Ap --noreport ~/my_namespace/testcollection/collection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
collection after installation (from git source):
```
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [drwxr-xr-x] 7.4
```
role `meta/requirements.yml` contains the following:
```
collections:
- name: my_namespace.testcollection
source: git+file:///home/twouters/my_namespace/testcollection#/collection/
version: 2022.31.0-coll5
type: git
```
contents of collection tar.gz after `ansible-galaxy collection build` seem to still be correct:
```
$ tar -tvf my_namespace-testcollection-2022.31.0-coll5.tar.gz | grep -F roles/php/templates
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/
lrw-r--r-- 0/0 0 1970-01-01 01:00 roles/php/templates/etc/php/7.4 -> 7.3
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/
-rw-r--r-- 0/0 430 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/www.conf.j2
```
installation from tar.gz with `ansible-galaxy collection install --force my_namespace-testcollection-2022.31.0-coll5.tar.gz` is fine:
```
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.5 (main, Aug 1 2022, 07:53:20) [GCC 12.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
$
```
### OS / Environment
Arch Linux
### Steps to Reproduce
See summary
### Expected Results
See summary. I expect to either have a copy of the symlink target or preferably the original symlink.
### Actual Results
```console
See summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78442
|
https://github.com/ansible/ansible/pull/78983
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
|
8cf7a0d3f0fc71bb148ceb6501851b39fe6a6f68
| 2022-08-04T12:56:59Z |
python
| 2022-10-06T17:53:23Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils.six import raise_from
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
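# Illustrative values accepted by SIGNATURE_COUNT_RE (semantics as implemented
# in verify_file_signatures below): '1' requires one successful signature,
# 'all' requires every provided signature to pass, and a leading '+' ('+1',
# '+all') additionally fails verification when no signature succeeds at all.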
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
        # Requires attrs to have a default value, so anything with a default
        # of None is swapped for its (potentially mutable) default.
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
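    # Illustrative: ManifestControl(**{'directives': None}) ends up with
    # directives == [] because a None default is replaced by field.type(),
    # giving each instance its own fresh mutable default.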
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
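# Illustrative call (paths and keyring are assumptions):
#   verify_file_signatures('ns.coll', '/path/to/MANIFEST.json', signatures,
#                          '~/.gnupg/pubring.kbx', '+1', [])
# returns True only when at least one signature verifies; the '+' makes a run
# with zero successful signatures fail even if all errors were ignorable.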
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise_from(AnsibleError(to_native(lookup_err)), lookup_err)
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
    :param ignore_errors: Whether to ignore any errors when installing the collection.
    :param no_deps: Ignore any collection dependencies and only install the base requirements.
    :param force: Re-install a collection if it has already been installed.
    :param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
    :param upgrade: Upgrade already installed collections to the latest version allowed by the requirement.
    :param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
    :param artifacts_manager: Artifacts manager.
    :param disable_gpg_verify: Skip GPG signature verification of the collection MANIFEST.json.
    :param offline: Do not contact any distribution servers when resolving collections.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
    :return: collection_path with 'ansible_collections' appended, if it was not already the final component.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
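# Example (illustrative paths):
# validate_collection_path('/opt/collections') -> '/opt/collections/ansible_collections'
# validate_collection_path('/opt/collections/ansible_collections') -> unchanged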
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download collection on a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if not signatures and not collection.signature_sources:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
    # Temporarily override the global display object with our own, which adds the calls to a queue for the thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
        # The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
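# Example (illustrative): _make_entry('roles/php', 'dir') returns
# {'name': 'roles/php', 'ftype': 'dir', 'chksum_type': None,
#  'chksum_sha256': None, 'format': MANIFEST_FORMAT}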
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns, manifest_control):
# type: (bytes, str, str, list[str], dict[str, t.Any]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control):
# type: (bytes, str, str, dict[str, t.Any]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst COPYING LICENSE',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt',
'recursive-include roles **.yml **.yaml **.json **.j2',
'recursive-include playbooks **.yml **.yaml **.json',
'recursive-include changelogs **.yml **.yaml',
'recursive-include plugins */**.py',
])
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
                if b_item in b_ignore_dirs or \
                        any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
                # Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the
                # same as for a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
if os.path.isdir(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
shutil.copyfile(src_file, dest_file)
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def find_existing_collections(path, artifacts_manager):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
b_path = to_bytes(path, errors='surrogate_or_strict')
# FIXME: consider using `glob.glob()` to simplify looping
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
# FIXME: consider feeding b_namespace_path to Candidate.from_dir_path to get subdirs automatically
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if not os.path.isdir(b_collection_path):
continue
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
raise_from(AnsibleError(val_err), val_err)
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop on the first one we are able to retrieve successfully
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
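# Examples (illustrative byte paths):
# _is_child_path(b'/coll/roles/x', b'/coll') -> True
# _is_child_path(b'/coll-extra', b'/coll') -> False (the separator check avoids
# false matches between sibling directories sharing a prefix)
# A relative link source is resolved against the link's parent directory first:
# _is_child_path(b'7.3', b'/coll', link_name=b'/coll/roles/php/7.4') -> True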
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,442 |
ansible galaxy install collection from git replaces directory symlinks with empty dir
|
### Summary
When a path in a collection is a symlink to a directory, the symlink is replaced with an empty directory instead of either a copy of the symlink target or the original symlink.
collection source:
```console
$ tree -Ap --noreport ~/my_namespace/testcollection/collection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
collection after installation (from git source):
```console
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [drwxr-xr-x] 7.4
```
The role's `meta/requirements.yml` contains the following:
```yaml
collections:
- name: my_namespace.testcollection
source: git+file:///home/twouters/my_namespace/testcollection#/collection/
version: 2022.31.0-coll5
type: git
```
The contents of the collection tar.gz after `ansible-galaxy collection build` still appear correct:
```console
$ tar -tvf my_namespace-testcollection-2022.31.0-coll5.tar.gz | grep -F roles/php/templates
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/
lrw-r--r-- 0/0 0 1970-01-01 01:00 roles/php/templates/etc/php/7.4 -> 7.3
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/
drwxr-xr-x 0/0 0 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/
-rw-r--r-- 0/0 430 2022-08-03 11:31 roles/php/templates/etc/php/7.3/fpm/pool.d/www.conf.j2
```
installation from tar.gz with `ansible-galaxy collection install --force my_namespace-testcollection-2022.31.0-coll5.tar.gz` is fine:
```console
$ tree -Ap --noreport ~/.ansible/collections/ansible_collections/my_namespace/testcollection/roles/php/templates/etc/php
[drwxr-xr-x] roles/php/templates/etc/php
├── [drwxr-xr-x] 7.3
│ └── [drwxr-xr-x] fpm
│ └── [drwxr-xr-x] pool.d
│ └── [-rw-r--r--] www.conf.j2
└── [lrwxrwxrwx] 7.4 -> 7.3
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.2]
config file = /home/twouters/.ansible.cfg
configured module search path = ['/home/twouters/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/twouters/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.5 (main, Aug 1 2022, 07:53:20) [GCC 12.1.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
$
```
### OS / Environment
Arch Linux
### Steps to Reproduce
See summary
### Expected Results
See summary. I expect either a copy of the symlink target or, preferably, the original symlink.
### Actual Results
```console
See summary
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78442
|
https://github.com/ansible/ansible/pull/78983
|
420564c5bcf752a821ae0599c3bd01ffba40f3ea
|
8cf7a0d3f0fc71bb148ceb6501851b39fe6a6f68
| 2022-08-04T12:56:59Z |
python
| 2022-10-06T17:53:23Z |
test/integration/targets/ansible-galaxy-collection/tasks/install.yml
|
---
- name: create test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: directory
- name: install simple collection from first accessible server
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install simple collection with implicit path - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_normal
- name: get installed files of install simple collection with implicit path - {{ test_id }}
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection with implicit path - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection with implicit path - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_normal.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: install existing without --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_no_force
- name: assert install existing without --force - {{ test_id }}
assert:
that:
- '"Nothing to do. All requested collections are already installed" in install_existing_no_force.stdout'
- name: install existing with --force - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1 -s '{{ test_name }}' --force {{ galaxy_verbosity }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_existing_force
- name: assert install existing with --force - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_existing_force.stdout'
- name: remove test installed collection - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: install pre-release as explicit version to custom dir - {{ test_id }}
command: ansible-galaxy collection install 'namespace1.name1:1.1.0-beta.1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release as explicit version to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release as explicit version to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: Remove beta
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
- name: install pre-release version with --pre to custom dir - {{ test_id }}
command: ansible-galaxy collection install --pre 'namespace1.name1' -s '{{ test_name }}' -p '{{ galaxy_dir }}/ansible_collections' {{ galaxy_verbosity }}
register: install_prerelease
- name: get result of install pre-release version with --pre to custom dir - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_prerelease_actual
- name: assert install pre-release version with --pre to custom dir - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_prerelease.stdout'
- (install_prerelease_actual.content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- name: install multiple collections with dependencies - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection:1.0.0 namespace2.name -s {{ test_name }} {{ galaxy_verbosity }}
args:
chdir: '{{ galaxy_dir }}/ansible_collections'
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_CONFIG: '{{ galaxy_dir }}/ansible.cfg'
register: install_multiple_with_dep
- name: get result of install multiple collections with dependencies - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection.namespace }}/{{ collection.name }}/MANIFEST.json'
register: install_multiple_with_dep_actual
loop_control:
loop_var: collection
loop:
- namespace: namespace2
name: name
- namespace: parent_dep
name: parent_collection
- namespace: child_dep
name: child_collection
- namespace: child_dep
name: child_dep2
- name: assert install multiple collections with dependencies - {{ test_id }}
assert:
that:
- (install_multiple_with_dep_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_multiple_with_dep_actual.results[2].content | b64decode | from_json).collection_info.version == '0.9.9'
- (install_multiple_with_dep_actual.results[3].content | b64decode | from_json).collection_info.version == '1.2.2'
- name: expect failure with dep resolution failure - {{ test_id }}
command: ansible-galaxy collection install fail_namespace.fail_collection -s {{ test_name }} {{ galaxy_verbosity }}
register: fail_dep_mismatch
failed_when:
- '"Could not satisfy the following requirements" not in fail_dep_mismatch.stderr'
- '" fail_dep2.name:<0.0.5 (dependency of fail_namespace.fail_collection:2.1.2)" not in fail_dep_mismatch.stderr'
- name: Find artifact url for namespace3.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace3/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: download a collection for an offline install - {{ test_id }}
get_url:
url: '{{ artifact_url_response.json.download_url }}'
dest: '{{ galaxy_dir }}/namespace3.tar.gz'
- name: install a collection from a tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/namespace3.tar.gz' {{ galaxy_verbosity }}
register: install_tarball
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a tarball - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace3/name/MANIFEST.json'
register: install_tarball_actual
- name: assert install a collection from a tarball - {{ test_id }}
assert:
that:
- '"Installing ''namespace3.name:1.0.0'' to" in install_tarball.stdout'
- (install_tarball_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: write a requirements file using the artifact and a conflicting version
copy:
content: |
collections:
- name: {{ galaxy_dir }}/namespace3.tar.gz
version: 1.2.0
dest: '{{ galaxy_dir }}/test_req.yml'
- name: install the requirements file with mismatched versions
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/test_req.yml' {{ galaxy_verbosity }}
ignore_errors: True
register: result
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: remove the requirements file
file:
path: '{{ galaxy_dir }}/test_req.yml'
state: absent
- assert:
that: error == expected_error
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (direct request)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
- name: test error for mismatched dependency versions
vars:
error: "{{ result.stderr | regex_replace('\\n', ' ') }}"
expected_error: >-
ERROR! Failed to resolve the requested dependencies map.
Got the candidate namespace3.name:1.0.0 (dependency of tmp_parent.name:1.0.0)
which didn't satisfy all of the following requirements:
* namespace3.name:1.2.0
environment:
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
block:
- name: init a new parent collection
command: ansible-galaxy collection init tmp_parent.name --init-path '{{ galaxy_dir }}/scratch'
- name: replace the dependencies
lineinfile:
path: "{{ galaxy_dir }}/scratch/tmp_parent/name/galaxy.yml"
regexp: "^dependencies:*"
line: "dependencies: { '{{ galaxy_dir }}/namespace3.tar.gz': '1.2.0' }"
- name: build the new artifact
command: ansible-galaxy collection build {{ galaxy_dir }}/scratch/tmp_parent/name
args:
chdir: "{{ galaxy_dir }}"
- name: install the artifact to verify the error is handled
command: ansible-galaxy collection install '{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz'
ignore_errors: yes
register: result
- debug: msg="Actual - {{ error }}"
- debug: msg="Expected - {{ expected_error }}"
- assert:
that: error == expected_error
always:
- name: clean up collection skeleton and artifact
file:
state: absent
path: "{{ item }}"
loop:
- "{{ galaxy_dir }}/scratch/tmp_parent/"
- "{{ galaxy_dir }}/tmp_parent-name-1.0.0.tar.gz"
- name: setup bad tarball - {{ test_id }}
script: build_bad_tar.py {{ galaxy_dir | quote }}
- name: fail to install a collection from a bad tarball - {{ test_id }}
command: ansible-galaxy collection install '{{ galaxy_dir }}/suspicious-test-1.0.0.tar.gz' {{ galaxy_verbosity }}
register: fail_bad_tar
failed_when: fail_bad_tar.rc != 1 and "Cannot extract tar entry '../../outside.sh' as it will be placed outside the collection directory" not in fail_bad_tar.stderr
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of failed collection install - {{ test_id }}
stat:
    path: '{{ galaxy_dir }}/ansible_collections/suspicious'
register: fail_bad_tar_actual
- name: assert result of failed collection install - {{ test_id }}
assert:
that:
- not fail_bad_tar_actual.stat.exists
- name: Find artifact url for namespace4.name
uri:
url: '{{ test_server }}{{ vX }}collections/namespace4/name/versions/1.0.0/'
user: '{{ pulp_user }}'
password: '{{ pulp_password }}'
force_basic_auth: true
register: artifact_url_response
- name: install a collection from a URI - {{ test_id }}
command: ansible-galaxy collection install {{ artifact_url_response.json.download_url}} {{ galaxy_verbosity }}
register: install_uri
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collection from a URI - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace4/name/MANIFEST.json'
register: install_uri_actual
- name: assert install a collection from a URI - {{ test_id }}
assert:
that:
- '"Installing ''namespace4.name:1.0.0'' to" in install_uri.stdout'
- (install_uri_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: fail to install a collection with an undefined URL - {{ test_id }}
command: ansible-galaxy collection install namespace5.name {{ galaxy_verbosity }}
register: fail_undefined_server
failed_when: '"No setting was provided for required configuration plugin_type: galaxy_server plugin: undefined" not in fail_undefined_server.stderr'
environment:
ANSIBLE_GALAXY_SERVER_LIST: undefined
- when: not requires_auth
block:
- name: install a collection with an empty server list - {{ test_id }}
command: ansible-galaxy collection install namespace5.name -s '{{ test_server }}' {{ galaxy_verbosity }}
register: install_empty_server_list
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_SERVER_LIST: ''
- name: get result of a collection with an empty server list - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace5/name/MANIFEST.json'
register: install_empty_server_list_actual
- name: assert install a collection with an empty server list - {{ test_id }}
assert:
that:
- '"Installing ''namespace5.name:1.0.0'' to" in install_empty_server_list.stdout'
- (install_empty_server_list_actual.content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with both roles and collections - {{ test_id }}
copy:
content: |
collections:
- namespace6.name
- name: namespace7.name
roles:
- skip.me
dest: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
- name: install roles from requirements file with collection-only keyring option
command: ansible-galaxy role install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }}
vars:
req_file: '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml'
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_opt
- assert:
that:
- invalid_opt is failed
- "'unrecognized arguments: --keyring' in invalid_opt.stderr"
# Need to run with -vvv to validate the "roles will be skipped" message
- name: install collections only with requirements-with-role.yml - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/ansible_collections/requirements-with-role.yml' -s '{{ test_name }}' -vvv
register: install_req_collection
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections only with requirements-with-roles.yml - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_collection_actual
loop_control:
loop_var: collection
loop:
- namespace6
- namespace7
- name: assert install collections only with requirements-with-role.yml - {{ test_id }}
assert:
that:
- '"contains roles which will be ignored" in install_req_collection.stdout'
- '"Installing ''namespace6.name:1.0.0'' to" in install_req_collection.stdout'
- '"Installing ''namespace7.name:1.0.0'' to" in install_req_collection.stdout'
- (install_req_collection_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_collection_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: create test requirements file with just collections - {{ test_id }}
copy:
content: |
collections:
- namespace8.name
- name: namespace9.name
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
- name: install collections with ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}'
register: install_req
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: Test deviations on -r and --role-file without collection or role sub command
command: '{{ cmd }}'
loop:
- ansible-galaxy install -vr '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vv
- ansible-galaxy install --role-file '{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
- ansible-galaxy install --role-file='{{ galaxy_dir }}/ansible_collections/requirements.yaml' -s '{{ test_name }}' -vvv
loop_control:
loop_var: cmd
- name: uninstall collections for next requirements file test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: rewrite requirements file with collections and signatures
copy:
content: |
collections:
- name: namespace7.name
version: "1.0.0"
signatures:
- "{{ not_mine }}"
- "{{ also_not_mine }}"
- "file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
- namespace8.name
- name: namespace9.name
signatures:
- "file://{{ gpg_homedir }}/namespace9-name-1.0.0-MANIFEST.json.asc"
dest: '{{ galaxy_dir }}/ansible_collections/requirements.yaml'
vars:
not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
also_not_mine: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
- name: installing only roles does not fail if keyring for collections is not provided
command: ansible-galaxy role install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
register: roles_only
- assert:
that:
- roles_only is success
- name: installing only roles implicitly does not fail if keyring for collections is not provided
  # if -p/--roles-path is specified, only roles are installed
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml -p {{ galaxy_dir }}
register: roles_only
- assert:
that:
- roles_only is success
- name: installing roles and collections requires keyring if collections have signatures
  command: ansible-galaxy install -r {{ galaxy_dir }}/ansible_collections/requirements.yaml
ignore_errors: yes
register: collections_and_roles
- assert:
that:
- collections_and_roles is failed
- "'no keyring was configured' in collections_and_roles.stderr"
- name: install collection with mutually exclusive options
command: ansible-galaxy collection install -r {{ req_file }} -s {{ test_name }} {{ cli_signature }}
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
# --signature is an ansible-galaxy collection install subcommand, but mutually exclusive with -r
cli_signature: "--signature file://{{ gpg_homedir }}/namespace7-name-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: mutually_exclusive_opts
- assert:
that:
- mutually_exclusive_opts is failed
- expected_error in actual_error
vars:
expected_error: >-
The --signatures option and --requirements-file are mutually exclusive.
Use the --signatures with positional collection_name args or provide a
'signatures' key for requirements in the --requirements-file.
actual_error: "{{ mutually_exclusive_opts.stderr }}"
- name: install a collection with user-supplied signatures for verification but no keyring
command: ansible-galaxy collection install namespace1.name1:1.0.0 {{ cli_signature }}
vars:
cli_signature: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.0-MANIFEST.json.asc"
ignore_errors: yes
register: required_together
- assert:
that:
- required_together is failed
- '"ERROR! Signatures were provided to verify namespace1.name1 but no keyring was configured." in required_together.stderr'
- name: install collections with ansible-galaxy install -r with invalid signatures - {{ test_id }}
# Note that --keyring is a valid option for 'ansible-galaxy install -r ...', not just 'ansible-galaxy collection ...'
command: ansible-galaxy install -r {{ req_file }} -s {{ test_name }} --keyring {{ keyring }} {{ galaxy_verbosity }}
register: install_req
ignore_errors: yes
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
- name: assert invalid signature is fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is failed
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed" in install_req.stderr'
# The other collections shouldn't be installed because they're listed
# after the failing collection and --ignore-errors was not provided
- '"Installing ''namespace8.name:1.0.0'' to" not in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" not in install_req.stdout'
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collections with ansible-galaxy install and --ignore-errors - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} -vvvv
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }} --ignore-errors"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install --ignore-errors - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." in install_stderr'
- '"Failed to install collection namespace7.name:1.0.0 but skipping due to --ignore-errors being set." in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace8
- namespace9
- name: install collections with only one valid signature using ansible-galaxy install - {{ test_id }}
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }}
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert just one valid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: install collections with only one valid signature by ignoring the other errors
command: ansible-galaxy install -r {{ req_file }} {{ cli_opts }} {{ galaxy_verbosity }} --ignore-signature-status-code FAILURE
register: install_req
vars:
req_file: "{{ galaxy_dir }}/ansible_collections/requirements.yaml"
cli_opts: "-s {{ test_name }} --keyring {{ keyring }}"
keyring: "{{ gpg_homedir }}/pubring.kbx"
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: all
ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES: BADSIG # cli option is appended and both status codes are ignored
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
- name: get result of install collections with ansible-galaxy install - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name/MANIFEST.json'
register: install_req_actual
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
- name: assert invalid signature is not fatal with ansible-galaxy install - {{ test_id }}
assert:
that:
- install_req is success
- '"Installing ''namespace7.name:1.0.0'' to" in install_req.stdout'
- '"Signature verification failed for ''namespace7.name'' (return code 1)" not in install_req.stdout'
- '"Not installing namespace7.name because GnuPG signature verification failed." not in install_stderr'
- '"Installing ''namespace8.name:1.0.0'' to" in install_req.stdout'
- '"Installing ''namespace9.name:1.0.0'' to" in install_req.stdout'
- (install_req_actual.results[0].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- (install_req_actual.results[2].content | b64decode | from_json).collection_info.version == '1.0.0'
vars:
install_stderr: "{{ install_req.stderr | regex_replace('\\n', ' ') }}"
- name: clean up collections from last test
file:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/name'
state: absent
loop_control:
loop_var: collection
loop:
- namespace7
- namespace8
- namespace9
# Uncomment once pulp container is at pulp>=0.5.0
#- name: install cache.cache at the current latest version
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' -vvv
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- set_fact:
# cache_version_build: '{{ (cache_version_build | int) + 1 }}'
#
#- name: publish update for cache.cache test
# setup_collections:
# server: galaxy_ng
# collections:
# - namespace: cache
# name: cache
# version: 1.0.{{ cache_version_build }}
#
#- name: make sure the cache version list is ignored on a collection version change - {{ test_id }}
# command: ansible-galaxy collection install cache.cache -s '{{ test_name }}' --force -vvv
# register: install_cached_update
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
#
#- name: get result of cache version list is ignored on a collection version change - {{ test_id }}
# slurp:
# path: '{{ galaxy_dir }}/ansible_collections/cache/cache/MANIFEST.json'
# register: install_cached_update_actual
#
#- name: assert cache version list is ignored on a collection version change - {{ test_id }}
# assert:
# that:
# - '"Installing ''cache.cache:1.0.{{ cache_version_build }}'' to" in install_cached_update.stdout'
# - (install_cached_update_actual.content | b64decode | from_json).collection_info.version == '1.0.' ~ cache_version_build
- name: install collection with symlink - {{ test_id }}
command: ansible-galaxy collection install symlink.symlink -s '{{ test_name }}' {{ galaxy_verbosity }}
environment:
    ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_symlink
- find:
paths: '{{ galaxy_dir }}/ansible_collections/symlink/symlink'
recurse: yes
file_type: any
- name: get result of install collection with symlink - {{ test_id }}
stat:
path: '{{ galaxy_dir }}/ansible_collections/symlink/symlink/{{ path }}'
register: install_symlink_actual
loop_control:
loop_var: path
loop:
- REÅDMÈ.md-link
- docs/REÅDMÈ.md
- plugins/REÅDMÈ.md
- REÅDMÈ.md-outside-link
- docs-link
- docs-link/REÅDMÈ.md
- name: assert install collection with symlink - {{ test_id }}
assert:
that:
- '"Installing ''symlink.symlink:1.0.0'' to" in install_symlink.stdout'
- install_symlink_actual.results[0].stat.islnk
- install_symlink_actual.results[0].stat.lnk_target == 'REÅDMÈ.md'
- install_symlink_actual.results[1].stat.islnk
- install_symlink_actual.results[1].stat.lnk_target == '../REÅDMÈ.md'
- install_symlink_actual.results[2].stat.islnk
- install_symlink_actual.results[2].stat.lnk_target == '../REÅDMÈ.md'
- install_symlink_actual.results[3].stat.isreg
- install_symlink_actual.results[4].stat.islnk
- install_symlink_actual.results[4].stat.lnk_target == 'docs'
- install_symlink_actual.results[5].stat.islnk
- install_symlink_actual.results[5].stat.lnk_target == '../REÅDMÈ.md'
- name: remove install directory for the next test because parent_dep.parent_collection was installed - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
- name: install collection and dep compatible with multiple requirements - {{ test_id }}
command: ansible-galaxy collection install parent_dep.parent_collection parent_dep2.parent_collection
environment:
    ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''parent_dep.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''parent_dep2.parent_collection:1.0.0'' to" in install_req.stdout'
- '"Installing ''child_dep.child_collection:0.5.0'' to" in install_req.stdout'
- name: install a collection to a directory that contains another collection with no metadata
block:
# Collections are usable in ansible without a galaxy.yml or MANIFEST.json
- name: create a collection directory
file:
state: directory
path: '{{ galaxy_dir }}/ansible_collections/unrelated_namespace/collection_without_metadata/plugins'
- name: install a collection to the same installation directory - {{ test_id }}
command: ansible-galaxy collection install namespace1.name1
environment:
      ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_req
- name: assert installed collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in install_req.stdout'
- name: remove test collection install directory - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install collection with signature with invalid keyring
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_option }} {{ keyring_option }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_option: "--signature file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring_option: '--keyring {{ gpg_homedir }}/i_do_not_exist.kbx'
ignore_errors: yes
register: keyring_error
- assert:
that:
- keyring_error is failed
- expected_errors[0] in actual_error
- expected_errors[1] in actual_error
- expected_errors[2] in actual_error
- unexpected_warning not in actual_warning
vars:
keyring: "{{ gpg_homedir }}/i_do_not_exist.kbx"
expected_errors:
- "Signature verification failed for 'namespace1.name1' (return code 2):"
- "* The public key is not available."
- >-
* It was not possible to check the signature. This may be caused
by a missing public key or an unsupported algorithm. A RC of 4
indicates unknown algorithm, a 9 indicates a missing public key.
unexpected_warning: >-
The GnuPG keyring used for collection signature
verification was not configured but signatures were
provided by the Galaxy server to verify authenticity.
Configure a keyring for ansible-galaxy to use
or disable signature verification.
Skipping signature verification.
actual_warning: "{{ keyring_error.stderr | regex_replace('\\n', ' ') }}"
# Remove formatting from the reason so it's one line
actual_error: "{{ keyring_error.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
# TODO: Uncomment once signatures are provided by pulp-galaxy-ng
#- name: install collection with signature provided by Galaxy server (no keyring)
# command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }}
# environment:
# ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
# ANSIBLE_NOCOLOR: True
# ANSIBLE_FORCE_COLOR: False
# ignore_errors: yes
# register: keyring_warning
#
#- name: assert a warning was given but signature verification did not occur without configuring the keyring
# assert:
# that:
# - keyring_warning is not failed
#      - '"Installing ''namespace1.name1:1.0.9'' to" in keyring_warning.stdout'
# # TODO: Don't just check the stdout, make sure the collection was installed.
# - expected_warning in actual_warning
# vars:
# expected_warning: >-
# The GnuPG keyring used for collection signature
# verification was not configured but signatures were
# provided by the Galaxy server to verify authenticity.
# Configure a keyring for ansible-galaxy to use
# or disable signature verification.
# Skipping signature verification.
# actual_warning: "{{ keyring_warning.stderr | regex_replace('\\n', ' ') }}"
- name: install simple collection from first accessible server with valid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace1-name1-1.0.9-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
register: from_first_good_server
- name: get installed files of install simple collection from first good server
find:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
file_type: file
register: install_normal_files
- name: get the manifest of install simple collection from first good server
slurp:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1/MANIFEST.json'
register: install_normal_manifest
- name: assert install simple collection from first good server
assert:
that:
- '"Installing ''namespace1.name1:1.0.9'' to" in from_first_good_server.stdout'
- install_normal_files.files | length == 3
- install_normal_files.files[0].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[1].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- install_normal_files.files[2].path | basename in ['MANIFEST.json', 'FILES.json', 'README.md']
- (install_normal_manifest.content | b64decode | from_json).collection_info.version == '1.0.9'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
# This command is hardcoded with -vvvv purposefully to evaluate extra verbosity messages
- name: install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 -vvvv {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
ANSIBLE_NOCOLOR: True
ANSIBLE_FORCE_COLOR: False
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }}"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: invalid_signature
- assert:
that:
- invalid_signature is failed
- "'Not installing namespace1.name1 because GnuPG signature verification failed.' in invalid_signature.stderr"
- expected_errors[0] in install_stdout
- expected_errors[1] in install_stdout
vars:
expected_errors:
- "* This is the counterpart to SUCCESS and used to indicate a program failure."
- "* The signature with the keyid has not been verified okay."
# Remove formatting from the reason so it's one line
install_stdout: "{{ invalid_signature.stdout | regex_replace('\"') | regex_replace('\\n') | regex_replace(' ', ' ') }}"
- name: validate collection directory was not created
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1/name1'
state: absent
register: collection_dir
check_mode: yes
failed_when: collection_dir is changed
- name: disable signature verification and install simple collection with invalid detached signature
command: ansible-galaxy collection install namespace1.name1 {{ galaxy_verbosity }} {{ signature_options }}
environment:
ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
vars:
signature_options: "--signature {{ signature }} --keyring {{ keyring }} --disable-gpg-verify"
signature: "file://{{ gpg_homedir }}/namespace2-name-1.0.0-MANIFEST.json.asc"
keyring: "{{ gpg_homedir }}/pubring.kbx"
ignore_errors: yes
register: ignore_invalid_signature
- assert:
that:
- ignore_invalid_signature is success
- '"Installing ''namespace1.name1:1.0.9'' to" in ignore_invalid_signature.stdout'
- name: use lenient signature verification (default) without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "all"
register: missing_signature
- assert:
that:
- missing_signature is success
- missing_signature.rc == 0
- '"namespace1.name1:1.0.0 was installed successfully" in missing_signature.stdout'
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" not in missing_signature.stdout'
- name: use strict signature verification without providing signatures
command: ansible-galaxy collection install namespace1.name1:1.0.0 -vvvv --keyring {{ gpg_homedir }}/pubring.kbx --force
environment:
ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT: "+1"
ignore_errors: yes
register: missing_signature
- assert:
that:
- missing_signature is failed
- missing_signature.rc == 1
- '"Signature verification failed for ''namespace1.name1'': no successful signatures" in missing_signature.stdout'
- '"Not installing namespace1.name1 because GnuPG signature verification failed" in missing_signature.stderr'
- name: Remove the collection
file:
path: '{{ galaxy_dir }}/ansible_collections/namespace1'
state: absent
- name: download collections with pre-release dep - {{ test_id }}
command: ansible-galaxy collection download dep_with_beta.parent namespace1.name1:1.1.0-beta.1 -p '{{ galaxy_dir }}/scratch'
- name: install collection with concrete pre-release dep - {{ test_id }}
command: ansible-galaxy collection install -r '{{ galaxy_dir }}/scratch/requirements.yml'
args:
chdir: '{{ galaxy_dir }}/scratch'
environment:
    ANSIBLE_COLLECTIONS_PATH: '{{ galaxy_dir }}/ansible_collections'
register: install_concrete_pre
- name: get result of install collections with concrete pre-release dep - {{ test_id }}
slurp:
path: '{{ galaxy_dir }}/ansible_collections/{{ collection }}/MANIFEST.json'
register: install_concrete_pre_actual
loop_control:
loop_var: collection
loop:
- namespace1/name1
- dep_with_beta/parent
- name: assert install collections with ansible-galaxy install - {{ test_id }}
assert:
that:
- '"Installing ''namespace1.name1:1.1.0-beta.1'' to" in install_concrete_pre.stdout'
- '"Installing ''dep_with_beta.parent:1.0.0'' to" in install_concrete_pre.stdout'
- (install_concrete_pre_actual.results[0].content | b64decode | from_json).collection_info.version == '1.1.0-beta.1'
- (install_concrete_pre_actual.results[1].content | b64decode | from_json).collection_info.version == '1.0.0'
- name: remove collection dir after round of testing - {{ test_id }}
file:
path: '{{ galaxy_dir }}/ansible_collections'
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,536 |
Copy module does not copy all files recursively in remote_src mode
|
### Summary
When _src_ and _dest_ have some inner folders in common, and an inner folder contains some nested folders that already exist under _dest_ and some that do not, the copy module does not copy the contents of the new nested folders. The problem only happens in _remote_src_ mode
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
ansible 2.10.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tngo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tngo/.pyenv/versions/3.6.8/lib/python3.6/site-packages/ansible
executable location = /home/tngo/.pyenv/versions/3.6.8/bin/ansible
python version = 3.6.8 (default, Dec 6 2019, 10:08:19) [GCC 7.4.0]
```
### Configuration
```console
Tested with `-c local`
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
# test.yml
- hosts: all
tasks:
- copy:
src: /tmp/src/
dest: /tmp/dest/
remote_src: yes
```
```shell
rm -rf /tmp/src/ /tmp/dest/
mkdir -p /tmp/src/a/b1/c1
ansible-playbook -c local -i localhost, test.yml
tree /tmp/dest/
rm -rf /tmp/src/
mkdir -p /tmp/src/a/b1/c2
mkdir -p /tmp/src/a/b2/c3
ansible-playbook -c local -i localhost, test.yml
tree /tmp/dest/
```
### Expected Results
The folder _/tmp/src/a/b1/c2_ must be copied to _/tmp/dest/a/b1/c2_
### Actual Results
```console
PLAY [all] ***************************************************************************************
TASK [Gathering Facts] ***************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host localhost should use /usr/bin/python3,
but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future
Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information. This feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]
TASK [copy] **************************************************************************************
changed: [localhost]
PLAY RECAP ***************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
/tmp/dest/
└── a
└── b1
└── c1
3 directories, 0 files
PLAY [all] ***************************************************************************************
TASK [Gathering Facts] ***************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host localhost should use /usr/bin/python3,
but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future
Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information. This feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]
TASK [copy] **************************************************************************************
changed: [localhost]
PLAY RECAP ***************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
/tmp/dest/
└── a
├── b1
│ └── c1
└── b2
└── c3
5 directories, 0 files
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74536
|
https://github.com/ansible/ansible/pull/76997
|
27ab589ee874cac7aad65cfb3630a5b38082e4b8
|
e208fe59329a45966d23f28bd92c0ee5592ac71b
| 2021-05-03T14:40:00Z |
python
| 2022-10-07T18:24:46Z |
lib/ansible/modules/copy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The C(copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(ansible.builtin.fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(ansible.builtin.template) module.
Using a variable in the C(content) field will result in unpredictable output.
- For Windows targets, use the M(ansible.windows.win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
    - If path is a directory, it is copied recursively. In this case, if the path ends
      with "/", only the contents of that directory are copied to the destination.
      Otherwise, if it does not end with "/", the directory itself with all its contents
      is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of C(src), sets the contents of a file directly to the specified value.
- Works only when C(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if C(content) contains a variable, use the
M(ansible.builtin.template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If C(src) is a directory, this must be a directory too.
- If C(dest) is a non-existent path and if either C(dest) ends with "/" or C(src) is a directory, C(dest) is created.
- If I(dest) is a relative path, the starting directory is determined by the remote host.
- If C(src) and C(dest) are files, the parent directory of C(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If C(yes), the remote file will be replaced when contents are different than the source.
- If C(no), the file will only be transferred if the destination does not exist.
type: bool
default: yes
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string C(preserve).
- C(preserve) means that the file will be given the same permissions as the source file.
- When doing a recursive copy, see also C(directory_mode).
- If C(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If C(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying C(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
directory_mode:
description:
- When doing a recursive copy set the mode for the directories.
- If this is not set we will use the system defaults.
- The mode is only set on directories which are newly created, and will not affect those that already existed.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether C(src) needs to be transferred or already is present remotely.
- If C(no), it will search for C(src) on the controller node.
- If C(yes) it will search for C(src) on the managed (remote) node.
- C(remote_src) supports recursive copying as of version 2.8.
- C(remote_src) only works with C(mode=preserve) as of version 2.6.
- Autodecryption of files does not work when C(remote_src=yes).
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, ansible will use the local calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.flow
notes:
- The M(ansible.builtin.copy) module's recursive copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.fetch
- module: ansible.builtin.file
- module: ansible.builtin.template
- module: ansible.posix.synchronize
- module: ansible.windows.win_copy
author:
- Ansible Core Team
- Michael DeHaan
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
safe_file_operations:
support: full
vault:
support: full
version_added: '2.2'
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
ansible.builtin.copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
ansible.builtin.copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
ansible.builtin.copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
ansible.builtin.copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
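# Editorial addition (hypothetical paths), illustrating the trailing-slash
# semantics described for the src option: only the contents of /tmp/src are
# copied into /tmp/dest, not the /tmp/src directory itself
- name: Recursively copy the contents of a directory that is already on the remote machine
  ansible.builtin.copy:
    src: /tmp/src/
    dest: /tmp/dest/
    remote_src: yes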
'''
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy.
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created.
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution.
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution.
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution.
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution.
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution.
returned: success
type: str
sample: "0644"
size:
description: Size of the target, after execution.
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution.
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
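# Editorial note: "setfacl -b" removes all extended ACL entries from the path
# while leaving the base owner/group/other permission bits intact.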
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
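# Editorial example (hypothetical paths): if only /opt exists on the target,
# split_pre_existing_dir('/opt/app/conf') returns ('/opt', ['app', 'conf']),
# i.e. the deepest existing ancestor plus the components left to create.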
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
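# Editorial note: the recursion above walks the newly created directories from
# the shallowest to the deepest (e.g. /opt, then /opt/app, then /opt/app/conf)
# and applies the requested ownership and mode to each one in turn.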
def chown_recursive(path, module):
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
"""Copy files that are different between `src` directory and `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
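# Editorial note: filecmp.dircmp compares one directory level at a time, and
# its diff_files attribute relies on a shallow comparison (os.stat signature:
# type, size and mtime), so a changed file with identical size and mtime can
# be missed. Subdirectories common to both sides are handled separately by
# copy_common_dirs below.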
def copy_left_only(src, dest, module):
"""Copy files that exist in `src` directory only to the `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
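    """Copy directories that exist in both `src` and `dest`, recursing into each of them."""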
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
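        # (the recursive call sits on the left of `or` so that short-circuit
        # evaluation cannot skip descending once `changed` is already True)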
        changed = copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module) or changed
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
follow=dict(type='bool', default=False),
),
add_file_common_args=True,
supports_check_mode=True,
)
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
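        # e.g. a source file with st_mode 0o100644 yields the mode string '0644'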
mode = module.params['mode']
checksum_dest = None
if os.path.isfile(src):
checksum_src = module.sha1(src)
else:
checksum_src = None
# Backwards compat only. This will be None in FIPS mode
try:
if os.path.isfile(src):
md5sum_src = module.md5(src)
else:
md5sum_src = None
except ValueError:
md5sum_src = None
changed = False
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if dest.endswith(os.sep):
if _original_basename:
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
            e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
os.makedirs(b_dirname)
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
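        # dest needs (re)writing: the contents differ, or dest is a symlink that
        # should be replaced by a regular file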
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
_, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
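                    # staging the temp copy inside the destination directory keeps
                    # the later atomic_move() rename on the same filesystem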
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
                # Record whether the source carries POSIX ACLs; this is consulted
                # after atomic_move() below when deciding whether to clear them
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception as e:
# assume unwanted ACLs by default
src_has_acls = True
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
                        # FIXME If dest has any default ACLs, they are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
else:
changed = False
# If neither have checksums, both src and dest are directories.
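        # Four layouts follow, keyed on whether src ends with os.sep and whether
        # dest already exists; each combination copies into a different target path.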
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
if not module.check_mode:
file_args = module.load_file_common_arguments(module.params, path=dest)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,536 |
Copy module does not copy all files recursively in remote_src mode
|
### Summary
When _src_ and _dest_ share some inner folders, and one of those inner folders contains some nested folders that already exist in _dest_ and some that do not, the copy module does not copy the contents of the missing nested folders. The problem only happens in _remote_src_ mode.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
ansible 2.10.6
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/tngo/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/tngo/.pyenv/versions/3.6.8/lib/python3.6/site-packages/ansible
executable location = /home/tngo/.pyenv/versions/3.6.8/bin/ansible
python version = 3.6.8 (default, Dec 6 2019, 10:08:19) [GCC 7.4.0]
```
### Configuration
```console
Tested with `-c local`
```
### OS / Environment
N/A
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml
# test.yml
- hosts: all
tasks:
- copy:
src: /tmp/src/
dest: /tmp/dest/
remote_src: yes
```
```shell
rm -rf /tmp/src/ /tmp/dest/
mkdir -p /tmp/src/a/b1/c1
ansible-playbook -c local -i localhost, test.yml
tree /tmp/dest/
rm -rf /tmp/src/
mkdir -p /tmp/src/a/b1/c2
mkdir -p /tmp/src/a/b2/c3
ansible-playbook -c local -i localhost, test.yml
tree /tmp/dest/
```
### Expected Results
The folder _/tmp/src/a/b1/c2_ must be copied to _/tmp/dest/a/b1/c2_
### Actual Results
```console
PLAY [all] ***************************************************************************************
TASK [Gathering Facts] ***************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host localhost should use /usr/bin/python3,
but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future
Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information. This feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]
TASK [copy] **************************************************************************************
changed: [localhost]
PLAY RECAP ***************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
/tmp/dest/
└── a
└── b1
└── c1
3 directories, 0 files
PLAY [all] ***************************************************************************************
TASK [Gathering Facts] ***************************************************************************
[DEPRECATION WARNING]: Distribution Ubuntu 18.04 on host localhost should use /usr/bin/python3,
but is using /usr/bin/python for backward compatibility with prior Ansible releases. A future
Ansible release will default to using the discovered platform python for this host. See
https://docs.ansible.com/ansible/2.10/reference_appendices/interpreter_discovery.html for more
information. This feature will be removed in version 2.12. Deprecation warnings can be disabled
by setting deprecation_warnings=False in ansible.cfg.
ok: [localhost]
TASK [copy] **************************************************************************************
changed: [localhost]
PLAY RECAP ***************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
/tmp/dest/
└── a
├── b1
│ └── c1
└── b2
└── c3
5 directories, 0 files
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74536
|
https://github.com/ansible/ansible/pull/76997
|
27ab589ee874cac7aad65cfb3630a5b38082e4b8
|
e208fe59329a45966d23f28bd92c0ee5592ac71b
| 2021-05-03T14:40:00Z |
python
| 2022-10-07T18:24:46Z |
test/integration/targets/copy/tasks/tests.yml
|
# test code for the copy module and action plugin
# (c) 2014, Michael DeHaan <[email protected]>
# (c) 2017, Ansible Project
#
# GNU General Public License v3 or later (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt )
#
- name: Record the output directory
set_fact:
remote_file: "{{ remote_dir }}/foo.txt"
- name: Initiate a basic copy, and also test the mode
copy:
src: foo.txt
dest: "{{ remote_file }}"
mode: 0444
register: copy_result
- name: Record the sha of the test file for later tests
set_fact:
remote_file_hash: "{{ copy_result['checksum'] }}"
- name: Check the mode of the output file
file:
name: "{{ remote_file }}"
state: file
register: file_result_check
- name: Assert the mode is correct
assert:
that:
- "file_result_check.mode == '0444'"
# same as expanduser & expandvars
- command: 'echo {{ remote_dir }}'
register: echo
- set_fact:
remote_dir_expanded: '{{ echo.stdout }}'
remote_file_expanded: '{{ echo.stdout }}/foo.txt'
- debug:
var: copy_result
verbosity: 1
- name: Assert basic copy worked
assert:
that:
- "'changed' in copy_result"
- copy_result.dest == remote_file_expanded
- "'group' in copy_result"
- "'gid' in copy_result"
- "'checksum' in copy_result"
- "'owner' in copy_result"
- "'size' in copy_result"
- "'src' in copy_result"
- "'state' in copy_result"
- "'uid' in copy_result"
- name: Verify that the file was marked as changed
assert:
that:
- "copy_result.changed == true"
- name: Verify that the file checksums are correct
assert:
that:
- "copy_result.checksum == ('foo.txt\n'|hash('sha1'))"
- name: Verify that the legacy md5sum is correct
assert:
that:
- "copy_result.md5sum == ('foo.txt\n'|hash('md5'))"
when: ansible_fips|bool != True
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- debug:
var: stat_results
verbosity: 1
- name: Assert the stat results are correct
assert:
that:
- "stat_results.stat.exists == true"
- "stat_results.stat.isblk == false"
- "stat_results.stat.isfifo == false"
- "stat_results.stat.isreg == true"
- "stat_results.stat.issock == false"
- "stat_results.stat.checksum == ('foo.txt\n'|hash('sha1'))"
- name: Overwrite the file via same means
copy:
src: foo.txt
dest: "{{ remote_file }}"
decrypt: no
register: copy_result2
- name: Assert that the file was not changed
assert:
that:
- "copy_result2 is not changed"
- name: Assert basic copy worked
assert:
that:
- "'changed' in copy_result2"
- copy_result2.dest == remote_file_expanded
- "'group' in copy_result2"
- "'gid' in copy_result2"
- "'checksum' in copy_result2"
- "'owner' in copy_result2"
- "'size' in copy_result2"
- "'state' in copy_result2"
- "'uid' in copy_result2"
- name: Overwrite the file using the content system
copy:
content: "modified"
dest: "{{ remote_file }}"
decrypt: no
register: copy_result3
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- debug:
var: stat_results
verbosity: 1
- name: Assert that the file has changed
assert:
that:
- "copy_result3 is changed"
- "'content' not in copy_result3"
- "stat_results.stat.checksum == ('modified'|hash('sha1'))"
- "stat_results.stat.mode != '0700'"
- name: Overwrite the file again using the content system, also passing along file params
copy:
content: "modified"
dest: "{{ remote_file }}"
mode: 0700
decrypt: no
register: copy_result4
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- debug:
var: stat_results
verbosity: 1
- name: Assert that the file has changed
assert:
that:
- "copy_result3 is changed"
- "'content' not in copy_result3"
- "stat_results.stat.checksum == ('modified'|hash('sha1'))"
- "stat_results.stat.mode == '0700'"
- name: Create a hardlink to the file
file:
src: '{{ remote_file }}'
dest: '{{ remote_dir }}/hard.lnk'
state: hard
- name: copy the same contents into place
copy:
content: 'modified'
dest: '{{ remote_file }}'
mode: 0700
decrypt: no
register: copy_results
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- name: Check the stat results of the hard link
stat:
path: "{{ remote_dir }}/hard.lnk"
register: hlink_results
- name: Check that the file did not change
assert:
that:
- 'stat_results.stat.inode == hlink_results.stat.inode'
- 'copy_results.changed == False'
- "stat_results.stat.checksum == ('modified'|hash('sha1'))"
- name: copy the same contents into place but change mode
copy:
content: 'modified'
dest: '{{ remote_file }}'
mode: 0404
decrypt: no
register: copy_results
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- name: Check the stat results of the hard link
stat:
path: "{{ remote_dir }}/hard.lnk"
register: hlink_results
- name: Check that the file changed permissions but is still the same
assert:
that:
- 'stat_results.stat.inode == hlink_results.stat.inode'
- 'copy_results.changed == True'
- 'stat_results.stat.mode == hlink_results.stat.mode'
- 'stat_results.stat.mode == "0404"'
- "stat_results.stat.checksum == ('modified'|hash('sha1'))"
- name: copy the different contents into place
copy:
content: 'adjusted'
dest: '{{ remote_file }}'
mode: 0404
register: copy_results
- name: Check the stat results of the file
stat:
path: "{{ remote_file }}"
register: stat_results
- name: Check the stat results of the hard link
stat:
path: "{{ remote_dir }}/hard.lnk"
register: hlink_results
- name: Check that the file changed and hardlink was broken
assert:
that:
- 'stat_results.stat.inode != hlink_results.stat.inode'
- 'copy_results.changed == True'
- "stat_results.stat.checksum == ('adjusted'|hash('sha1'))"
- "hlink_results.stat.checksum == ('modified'|hash('sha1'))"
- name: Try invalid copy input location fails
copy:
src: invalid_file_location_does_not_exist
dest: "{{ remote_dir }}/file.txt"
ignore_errors: True
register: failed_copy
- name: Assert that invalid source failed
assert:
that:
- "failed_copy.failed"
- "'invalid_file_location_does_not_exist' in failed_copy.msg"
- name: Try empty source to ensure it fails
copy:
src: ''
dest: "{{ remote_dir }}"
ignore_errors: True
register: failed_copy
- debug:
var: failed_copy
verbosity: 1
- name: Assert that empty source failed
assert:
that:
- failed_copy is failed
- "'src (or content) is required' in failed_copy.msg"
- name: Try without destination to ensure it fails
copy:
src: foo.txt
ignore_errors: True
register: failed_copy
- debug:
var: failed_copy
verbosity: 1
- name: Assert that missing destination failed
assert:
that:
- failed_copy is failed
- "'dest is required' in failed_copy.msg"
- name: Try without source to ensure it fails
copy:
dest: "{{ remote_file }}"
ignore_errors: True
register: failed_copy
- debug:
var: failed_copy
verbosity: 1
- name: Assert that missing source failed
assert:
that:
- failed_copy is failed
- "'src (or content) is required' in failed_copy.msg"
- name: Try with both src and content to ensure it fails
copy:
src: foo.txt
content: testing
dest: "{{ remote_file }}"
ignore_errors: True
register: failed_copy
- name: Assert that mutually exclusive parameters failed
assert:
that:
- failed_copy is failed
- "'mutually exclusive' in failed_copy.msg"
- name: Try with content and directory as destination to ensure it fails
copy:
content: testing
dest: "{{ remote_dir }}"
ignore_errors: True
register: failed_copy
- debug:
var: failed_copy
verbosity: 1
- name: Assert that content and directory as destination failed
assert:
that:
- failed_copy is failed
- "'can not use content with a dir as dest' in failed_copy.msg"
- name: Clean up
file:
path: "{{ remote_file }}"
state: absent
- name: Copy source file to destination directory with mode
copy:
src: foo.txt
dest: "{{ remote_dir }}"
mode: 0500
register: copy_results
- name: Check the stat results of the file
stat:
path: '{{ remote_file }}'
register: stat_results
- debug:
var: stat_results
verbosity: 1
- name: Assert that the file has changed
assert:
that:
- "copy_results is changed"
- "stat_results.stat.checksum == ('foo.txt\n'|hash('sha1'))"
- "stat_results.stat.mode == '0500'"
# Test copy with mode=preserve
- name: Create file and set perms to an odd value
copy:
content: "foo.txt\n"
dest: '{{ local_temp_dir }}/foo.txt'
mode: 0547
delegate_to: localhost
- name: Copy with mode=preserve
copy:
src: '{{ local_temp_dir }}/foo.txt'
dest: '{{ remote_dir }}/copy-foo.txt'
mode: preserve
register: copy_results
- name: Check the stat results of the file
stat:
path: '{{ remote_dir }}/copy-foo.txt'
register: stat_results
- name: Assert that the file has changed and has correct mode
assert:
that:
- "copy_results is changed"
- "copy_results.mode == '0547'"
- "stat_results.stat.checksum == ('foo.txt\n'|hash('sha1'))"
- "stat_results.stat.mode == '0547'"
- name: Test copy with mode=preserve and remote_src=True
copy:
src: '{{ remote_dir }}/copy-foo.txt'
dest: '{{ remote_dir }}/copy-foo2.txt'
mode: 'preserve'
remote_src: True
register: copy_results2
- name: Check the stat results of the file
stat:
path: '{{ remote_dir }}/copy-foo2.txt'
register: stat_results2
- name: Assert that the file has changed and has correct mode
assert:
that:
- "copy_results2 is changed"
- "copy_results2.mode == '0547'"
- "stat_results2.stat.checksum == ('foo.txt\n'|hash('sha1'))"
- "stat_results2.stat.mode == '0547'"
#
# test recursive copy local_follow=False, no trailing slash
#
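# (a src path without a trailing slash copies the directory itself into dest;
# with a trailing slash, only the directory's contents are copied, as in rsync)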
- name: Create empty directory in the role we're copying from (git can't store empty dirs)
file:
path: '{{ role_path }}/files/subdir/subdira'
state: directory
delegate_to: localhost
- name: Set the output subdirectory
set_fact:
remote_subdir: "{{ remote_dir }}/sub"
- name: Make an output subdirectory
file:
name: "{{ remote_subdir }}"
state: directory
- name: Setup link target for absolute link
copy:
dest: /tmp/ansible-test-abs-link
content: target
delegate_to: localhost
- name: Setup link target dir for absolute link
file:
dest: /tmp/ansible-test-abs-link-dir
state: directory
delegate_to: localhost
- name: Test recursive copy to directory no trailing slash, local_follow=False
copy:
src: subdir
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: False
register: recursive_copy_result
- debug:
var: recursive_copy_result
verbosity: 1
- name: Assert that the recursive copy did something
assert:
that:
- "recursive_copy_result is changed"
- name: Check that a file in a directory was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/bar.txt"
register: stat_bar
- name: Check that a file in a deeper directory was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/subdir2/baz.txt"
register: stat_bar2
- name: Check that a file in a directory whose parent contains a directory alone was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/subdir2/subdir3/subdir4/qux.txt"
register: stat_bar3
- name: Assert recursive copy files
assert:
that:
- "stat_bar.stat.exists"
- "stat_bar2.stat.exists"
- "stat_bar3.stat.exists"
- name: Check symlink to absolute path
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/ansible-test-abs-link'
register: stat_abs_link
- name: Check symlink to relative path
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/bar.txt'
register: stat_relative_link
- name: Check symlink to self
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/invalid'
register: stat_self_link
- name: Check symlink to nonexistent file
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/invalid2'
register: stat_invalid_link
- name: Check symlink to directory in copy
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/subdir3'
register: stat_dir_in_copy_link
- name: Check symlink to directory outside of copy
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/ansible-test-abs-link-dir'
register: stat_dir_outside_copy_link
- name: Assert recursive copy symlinks local_follow=False
assert:
that:
- "stat_abs_link.stat.exists"
- "stat_abs_link.stat.islnk"
- "'/tmp/ansible-test-abs-link' == stat_abs_link.stat.lnk_target"
- "stat_relative_link.stat.exists"
- "stat_relative_link.stat.islnk"
- "'../bar.txt' == stat_relative_link.stat.lnk_target"
- "stat_self_link.stat.exists"
- "stat_self_link.stat.islnk"
- "'invalid' in stat_self_link.stat.lnk_target"
- "stat_invalid_link.stat.exists"
- "stat_invalid_link.stat.islnk"
- "'../invalid' in stat_invalid_link.stat.lnk_target"
- "stat_dir_in_copy_link.stat.exists"
- "stat_dir_in_copy_link.stat.islnk"
- "'../subdir2/subdir3' in stat_dir_in_copy_link.stat.lnk_target"
- "stat_dir_outside_copy_link.stat.exists"
- "stat_dir_outside_copy_link.stat.islnk"
- "'/tmp/ansible-test-abs-link-dir' == stat_dir_outside_copy_link.stat.lnk_target"
- name: Stat the recursively copied directories
stat:
path: "{{ remote_dir }}/sub/{{ item }}"
register: dir_stats
with_items:
- "subdir"
- "subdir/subdira"
- "subdir/subdir1"
- "subdir/subdir2"
- "subdir/subdir2/subdir3"
- "subdir/subdir2/subdir3/subdir4"
- debug:
var: stat_results
verbosity: 1
- name: Assert recursive copied directories mode (1)
assert:
that:
- "item.stat.exists"
- "item.stat.mode == '0700'"
with_items: "{{dir_stats.results}}"
- name: Test recursive copy to directory no trailing slash, local_follow=False second time
copy:
src: subdir
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: False
register: recursive_copy_result
- name: Assert that the second copy did not change anything
assert:
that:
- "recursive_copy_result is not changed"
- name: Cleanup the recursive copy subdir
file:
name: "{{ remote_subdir }}"
state: absent
#
# Recursive copy with local_follow=False, trailing slash
#
- name: Set the output subdirectory
set_fact:
remote_subdir: "{{ remote_dir }}/sub"
- name: Make an output subdirectory
file:
name: "{{ remote_subdir }}"
state: directory
- name: Setup link target for absolute link
copy:
dest: /tmp/ansible-test-abs-link
content: target
delegate_to: localhost
- name: Setup link target dir for absolute link
file:
dest: /tmp/ansible-test-abs-link-dir
state: directory
delegate_to: localhost
- name: Test recursive copy to directory trailing slash, local_follow=False
copy:
src: subdir/
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: False
register: recursive_copy_result
- debug:
var: recursive_copy_result
verbosity: 1
- name: Assert that the recursive copy did something
assert:
that:
- "recursive_copy_result is changed"
- name: Check that a file in a directory was transferred
stat:
path: "{{ remote_dir }}/sub/bar.txt"
register: stat_bar
- name: Check that a file in a deeper directory was transferred
stat:
path: "{{ remote_dir }}/sub/subdir2/baz.txt"
register: stat_bar2
- name: Check that a file in a directory whose parent contains a directory alone was transferred
stat:
path: "{{ remote_dir }}/sub/subdir2/subdir3/subdir4/qux.txt"
register: stat_bar3
- name: Assert recursive copy files
assert:
that:
- "stat_bar.stat.exists"
- "stat_bar2.stat.exists"
- "stat_bar3.stat.exists"
- name: Check symlink to absolute path
stat:
path: '{{ remote_dir }}/sub/subdir1/ansible-test-abs-link'
register: stat_abs_link
- name: Check symlink to relative path
stat:
path: '{{ remote_dir }}/sub/subdir1/bar.txt'
register: stat_relative_link
- name: Check symlink to self
stat:
path: '{{ remote_dir }}/sub/subdir1/invalid'
register: stat_self_link
- name: Check symlink to nonexistent file
stat:
path: '{{ remote_dir }}/sub/subdir1/invalid2'
register: stat_invalid_link
- name: Check symlink to directory in copy
stat:
path: '{{ remote_dir }}/sub/subdir1/subdir3'
register: stat_dir_in_copy_link
- name: Check symlink to directory outside of copy
stat:
path: '{{ remote_dir }}/sub/subdir1/ansible-test-abs-link-dir'
register: stat_dir_outside_copy_link
- name: Assert recursive copy symlinks local_follow=False trailing slash
assert:
that:
- "stat_abs_link.stat.exists"
- "stat_abs_link.stat.islnk"
- "'/tmp/ansible-test-abs-link' == stat_abs_link.stat.lnk_target"
- "stat_relative_link.stat.exists"
- "stat_relative_link.stat.islnk"
- "'../bar.txt' == stat_relative_link.stat.lnk_target"
- "stat_self_link.stat.exists"
- "stat_self_link.stat.islnk"
- "'invalid' in stat_self_link.stat.lnk_target"
- "stat_invalid_link.stat.exists"
- "stat_invalid_link.stat.islnk"
- "'../invalid' in stat_invalid_link.stat.lnk_target"
- "stat_dir_in_copy_link.stat.exists"
- "stat_dir_in_copy_link.stat.islnk"
- "'../subdir2/subdir3' in stat_dir_in_copy_link.stat.lnk_target"
- "stat_dir_outside_copy_link.stat.exists"
- "stat_dir_outside_copy_link.stat.islnk"
- "'/tmp/ansible-test-abs-link-dir' == stat_dir_outside_copy_link.stat.lnk_target"
- name: Stat the recursively copied directories
stat:
path: "{{ remote_dir }}/sub/{{ item }}"
register: dir_stats
with_items:
- "subdira"
- "subdir1"
- "subdir2"
- "subdir2/subdir3"
- "subdir2/subdir3/subdir4"
- debug:
var: dir_stats
verbosity: 1
- name: Assert recursive copied directories mode (2)
assert:
that:
- "item.stat.mode == '0700'"
with_items: "{{dir_stats.results}}"
- name: Test recursive copy to directory trailing slash, local_follow=False second time
copy:
src: subdir/
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: False
register: recursive_copy_result
- name: Assert that the second copy did not change anything
assert:
that:
- "recursive_copy_result is not changed"
- name: Cleanup the recursive copy subdir
file:
name: "{{ remote_subdir }}"
state: absent
#
# test recursive copy local_follow=True, no trailing slash
#
- name: Set the output subdirectory
set_fact:
remote_subdir: "{{ remote_dir }}/sub"
- name: Make an output subdirectory
file:
name: "{{ remote_subdir }}"
state: directory
- name: Setup link target for absolute link
copy:
dest: /tmp/ansible-test-abs-link
content: target
delegate_to: localhost
- name: Setup link target dir for absolute link
file:
dest: /tmp/ansible-test-abs-link-dir
state: directory
delegate_to: localhost
- name: Test recursive copy to directory no trailing slash, local_follow=True
copy:
src: subdir
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: True
register: recursive_copy_result
- debug:
var: recursive_copy_result
verbosity: 1
- name: Assert that the recursive copy did something
assert:
that:
- "recursive_copy_result is changed"
- name: Check that a file in a directory was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/bar.txt"
register: stat_bar
- name: Check that a file in a deeper directory was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/subdir2/baz.txt"
register: stat_bar2
- name: Check that a file in a directory whose parent contains a directory alone was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/subdir2/subdir3/subdir4/qux.txt"
register: stat_bar3
- name: Check that a file in a directory whose parent is a symlink was transferred
stat:
path: "{{ remote_dir }}/sub/subdir/subdir1/subdir3/subdir4/qux.txt"
register: stat_bar4
- name: Assert recursive copy files
assert:
that:
- "stat_bar.stat.exists"
- "stat_bar2.stat.exists"
- "stat_bar3.stat.exists"
- "stat_bar4.stat.exists"
- name: Check symlink to absolute path
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/ansible-test-abs-link'
register: stat_abs_link
- name: Check symlink to relative path
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/bar.txt'
register: stat_relative_link
- name: Check symlink to self
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/invalid'
register: stat_self_link
- name: Check symlink to nonexistent file
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/invalid2'
register: stat_invalid_link
- name: Check symlink to directory in copy
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/subdir3'
register: stat_dir_in_copy_link
- name: Check symlink to directory outside of copy
stat:
path: '{{ remote_dir }}/sub/subdir/subdir1/ansible-test-abs-link-dir'
register: stat_dir_outside_copy_link
- name: Assert recursive copy symlinks local_follow=True
assert:
that:
- "stat_abs_link.stat.exists"
- "not stat_abs_link.stat.islnk"
- "stat_abs_link.stat.checksum == ('target'|hash('sha1'))"
- "stat_relative_link.stat.exists"
- "not stat_relative_link.stat.islnk"
- "stat_relative_link.stat.checksum == ('baz\n'|hash('sha1'))"
- "stat_self_link.stat.exists"
- "stat_self_link.stat.islnk"
- "'invalid' in stat_self_link.stat.lnk_target"
- "stat_invalid_link.stat.exists"
- "stat_invalid_link.stat.islnk"
- "'../invalid' in stat_invalid_link.stat.lnk_target"
- "stat_dir_in_copy_link.stat.exists"
- "not stat_dir_in_copy_link.stat.islnk"
- "stat_dir_in_copy_link.stat.isdir"
- "stat_dir_outside_copy_link.stat.exists"
- "not stat_dir_outside_copy_link.stat.islnk"
- "stat_dir_outside_copy_link.stat.isdir"
- name: Stat the recursively copied directories
stat:
path: "{{ remote_dir }}/sub/{{ item }}"
register: dir_stats
with_items:
- "subdir"
- "subdir/subdira"
- "subdir/subdir1"
- "subdir/subdir1/subdir3"
- "subdir/subdir1/subdir3/subdir4"
- "subdir/subdir2"
- "subdir/subdir2/subdir3"
- "subdir/subdir2/subdir3/subdir4"
- debug:
var: dir_stats
verbosity: 1
- name: Assert recursive copied directories mode (3)
assert:
that:
- "item.stat.mode == '0700'"
with_items: "{{dir_stats.results}}"
- name: Test recursive copy to directory no trailing slash, local_follow=True second time
copy:
src: subdir
dest: "{{ remote_subdir }}"
directory_mode: 0700
local_follow: True
register: recursive_copy_result
- name: Assert that the second copy did not change anything
assert:
that:
- "recursive_copy_result is not changed"
- name: Cleanup the recursive copy subdir
file:
name: "{{ remote_subdir }}"
state: absent
#
# Recursive copy of tricky symlinks
#
- block:
- name: Create a directory to copy from
file:
path: '{{ local_temp_dir }}/source1'
state: directory
- name: Create a directory outside of the tree
file:
path: '{{ local_temp_dir }}/source2'
state: directory
- name: Create a symlink to a directory outside of the tree
file:
path: '{{ local_temp_dir }}/source1/link'
src: '{{ local_temp_dir }}/source2'
state: link
- name: Create a circular link back to the tree
file:
path: '{{ local_temp_dir }}/source2/circle'
src: '../source1'
state: link
- name: Create output directory
file:
path: '{{ local_temp_dir }}/dest1'
state: directory
delegate_to: localhost
- name: Recursive copy the source
copy:
src: '{{ local_temp_dir }}/source1'
dest: '{{ remote_dir }}/dest1'
local_follow: True
register: copy_result
- name: Check that the tree link is now a directory
stat:
path: '{{ remote_dir }}/dest1/source1/link'
register: link_result
- name: Check that the out of tree link is still a link
stat:
path: '{{ remote_dir }}/dest1/source1/link/circle'
register: circle_result
- name: Verify that the recursive copy worked
assert:
that:
- 'copy_result.changed'
- 'link_result.stat.isdir'
- 'not link_result.stat.islnk'
- 'circle_result.stat.islnk'
- '"../source1" == circle_result.stat.lnk_target'
- name: Recursive copy the source a second time
copy:
src: '{{ local_temp_dir }}/source1'
dest: '{{ remote_dir }}/dest1'
local_follow: True
register: copy_result
- name: Verify that the recursive copy made no changes
assert:
that:
- 'not copy_result.changed'
#
# Recursive copy with absolute paths (#27439)
#
- name: Test that remote_dir is appropriate for this test (absolute path)
assert:
that:
      - 'remote_dir_expanded[0] == "/"'
- block:
- name: Create a directory to copy
file:
path: '{{ local_temp_dir }}/source_recursive'
state: directory
- name: Create a file inside of the directory
copy:
content: "testing"
dest: '{{ local_temp_dir }}/source_recursive/file'
- name: Create a directory to place the test output in
file:
path: '{{ local_temp_dir }}/destination'
state: directory
delegate_to: localhost
- name: Copy the directory and files within (no trailing slash)
copy:
src: '{{ local_temp_dir }}/source_recursive'
dest: '{{ remote_dir }}/destination'
- name: Stat the recursively copied directory
stat:
path: "{{ remote_dir }}/destination/{{ item }}"
register: copied_stat
with_items:
- "source_recursive"
- "source_recursive/file"
- "file"
- debug:
var: copied_stat
verbosity: 1
- name: Assert with no trailing slash, directory and file is copied
assert:
that:
- "copied_stat.results[0].stat.exists"
- "copied_stat.results[1].stat.exists"
- "not copied_stat.results[2].stat.exists"
- name: Cleanup
file:
path: '{{ remote_dir }}/destination'
state: absent
# Try again with a trailing slash
- name: Create a directory to place the test output in
file:
path: '{{ remote_dir }}/destination'
state: directory
- name: Copy just the files inside of the directory
copy:
src: '{{ local_temp_dir }}/source_recursive/'
dest: '{{ remote_dir }}/destination'
- name: Stat the recursively copied directory
stat:
path: "{{ remote_dir }}/destination/{{ item }}"
register: copied_stat
with_items:
- "source_recursive"
- "source_recursive/file"
- "file"
- debug:
var: copied_stat
verbosity: 1
- name: Assert with trailing slash, only the file is copied
assert:
that:
- "not copied_stat.results[0].stat.exists"
- "not copied_stat.results[1].stat.exists"
- "copied_stat.results[2].stat.exists"
#
# Recursive copy with relative paths (#34893)
#
- name: Create a directory to copy
file:
path: 'source_recursive'
state: directory
delegate_to: localhost
- name: Create a file inside of the directory
copy:
content: "testing"
dest: 'source_recursive/file'
delegate_to: localhost
- name: Create a directory to place the test output in
file:
path: 'destination'
state: directory
delegate_to: localhost
- name: Copy the directory and files within (no trailing slash)
copy:
src: 'source_recursive'
dest: 'destination'
- name: Stat the recursively copied directory
stat:
path: "destination/{{ item }}"
register: copied_stat
with_items:
- "source_recursive"
- "source_recursive/file"
- "file"
- debug:
var: copied_stat
verbosity: 1
- name: Assert with no trailing slash, directory and file is copied
assert:
that:
- "copied_stat.results[0].stat.exists"
- "copied_stat.results[1].stat.exists"
- "not copied_stat.results[2].stat.exists"
- name: Cleanup
file:
path: 'destination'
state: absent
# Try again with a trailing slash
- name: Create a directory to place the test output in
file:
path: 'destination'
state: directory
- name: Copy just the files inside of the directory
copy:
src: 'source_recursive/'
dest: 'destination'
- name: Stat the recursively copied directory
stat:
path: "destination/{{ item }}"
register: copied_stat
with_items:
- "source_recursive"
- "source_recursive/file"
- "file"
- debug:
var: copied_stat
verbosity: 1
- name: Assert with trailing slash, only the file is copied
assert:
that:
- "not copied_stat.results[0].stat.exists"
- "not copied_stat.results[1].stat.exists"
- "copied_stat.results[2].stat.exists"
- name: Cleanup
file:
path: 'destination'
state: absent
- name: Cleanup
file:
path: 'source_recursive'
state: absent
#
# issue 8394
#
- name: Create a file with content and a literal multiline block
copy:
content: |
this is the first line
this is the second line
this line is after an empty line
this line is the last line
dest: "{{ remote_dir }}/multiline.txt"
register: copy_result6
- debug:
var: copy_result6
verbosity: 1
- name: Assert the multiline file was created correctly
assert:
that:
- "copy_result6.changed"
- "copy_result6.dest == '{{remote_dir_expanded}}/multiline.txt'"
- "copy_result6.checksum == '9cd0697c6a9ff6689f0afb9136fa62e0b3fee903'"
# test overwriting a file as an unprivileged user (pull request #8624)
# this can't be relative to {{remote_dir}} as ~root usually has mode 700
- block:
- name: Create world writable directory
file:
dest: /tmp/worldwritable
state: directory
mode: 0777
- name: Create world writable file
copy:
dest: /tmp/worldwritable/file.txt
content: "bar"
mode: 0666
- name: Overwrite the file as user nobody
copy:
dest: /tmp/worldwritable/file.txt
content: "baz"
become: yes
become_user: nobody
register: copy_result7
- name: Assert the file was overwritten
assert:
that:
- "copy_result7.changed"
- "copy_result7.dest == '/tmp/worldwritable/file.txt'"
- "copy_result7.checksum == ('baz'|hash('sha1'))"
- name: Clean up
file:
dest: /tmp/worldwritable
state: absent
remote_user: root
#
# Follow=True tests
#
# test overwriting a link using "follow=yes" so that the link
# is preserved and the link target is updated
- name: Create a test file to symlink to
copy:
dest: "{{ remote_dir }}/follow_test"
content: "this is the follow test file\n"
- name: Create a symlink to the test file
file:
path: "{{ remote_dir }}/follow_link"
src: './follow_test'
state: link
- name: Update the test file using follow=True to preserve the link
copy:
dest: "{{ remote_dir }}/follow_link"
src: foo.txt
follow: yes
register: replace_follow_result
- name: Stat the link path
stat:
path: "{{ remote_dir }}/follow_link"
register: stat_link_result
- name: Assert that the link is still a link and contents were changed
assert:
that:
- stat_link_result['stat']['islnk']
- stat_link_result['stat']['lnk_target'] == './follow_test'
- replace_follow_result['changed']
- "replace_follow_result['checksum'] == remote_file_hash"
# Symlink handling when the dest is already there
# https://github.com/ansible/ansible-modules-core/issues/1568
- name: test idempotency by trying to copy to the symlink with the same contents
copy:
dest: "{{ remote_dir }}/follow_link"
src: foo.txt
follow: yes
register: replace_follow_result
- name: Stat the link path
stat:
path: "{{ remote_dir }}/follow_link"
register: stat_link_result
- name: Assert that the link is still a link and contents were changed
assert:
that:
- stat_link_result['stat']['islnk']
- stat_link_result['stat']['lnk_target'] == './follow_test'
- not replace_follow_result['changed']
- replace_follow_result['checksum'] == remote_file_hash
- name: Update the test file using follow=False to overwrite the link
copy:
dest: '{{ remote_dir }}/follow_link'
content: 'modified'
follow: False
register: copy_results
- name: Check the stat results of the file
stat:
path: '{{remote_dir}}/follow_link'
register: stat_results
- debug:
var: stat_results
verbosity: 1
- name: Assert that the file has changed and is not a link
assert:
that:
- "copy_results is changed"
- "'content' not in copy_results"
- "stat_results.stat.checksum == ('modified'|hash('sha1'))"
- "not stat_results.stat.islnk"
# test overwriting a link using "follow=yes" so that the link
# is preserved and the link target is updated when the thing being copied is a link
#
# File mode tests
#
- name: setup directory for test
file: state=directory dest={{remote_dir }}/directory mode=0755
- name: set file mode when the destination is a directory
copy: src=foo.txt dest={{remote_dir}}/directory/ mode=0705
- name: set a different file mode when the destination is a directory
copy: src=foo.txt dest={{remote_dir}}/directory/ mode=0604
register: file_result
- name: check that the file has the correct attributes
stat: path={{ remote_dir }}/directory/foo.txt
register: file_attrs
- assert:
that:
- "file_attrs.stat.mode == '0604'"
# The below assertions make an invalid assumption: these attributes were not explicitly set
# - "file_attrs.stat.uid == 0"
# - "file_attrs.stat.pw_name == 'root'"
- name: check that the containing directory did not change attributes
stat: path={{ remote_dir }}/directory/
register: dir_attrs
- assert:
that:
- "dir_attrs.stat.mode == '0755'"
# Test that recursive copy of a directory containing a symlink to another
# directory, with mode=preserve and local_follow=no works.
# See: https://github.com/ansible/ansible/issues/68471
- name: Test recursive copy of dir with symlinks, mode=preserve, local_follow=False
copy:
src: '{{ role_path }}/files/subdir/'
dest: '{{ local_temp_dir }}/preserve_symlink/'
mode: preserve
local_follow: no
- name: check that we actually used and still have a symlink
stat: path={{ local_temp_dir }}/preserve_symlink/subdir1/bar.txt
register: symlink_path
- assert:
that:
- symlink_path.stat.exists
- symlink_path.stat.islnk
#
# I believe the below section is now covered in the recursive copying section.
# Keep this for now as an original test case, but delete it once confirmed that
# everything is passing
#
# Recursive copying with symlinks tests
#
- delegate_to: localhost
block:
- name: Create a test dir to copy
file:
path: '{{ local_temp_dir }}/top_dir'
state: directory
- name: Create a test dir to symlink to
file:
path: '{{ local_temp_dir }}/linked_dir'
state: directory
- name: Create a file in the test dir
copy:
dest: '{{ local_temp_dir }}/linked_dir/file1'
content: 'hello world'
- name: Create a link to the test dir
file:
path: '{{ local_temp_dir }}/top_dir/follow_link_dir'
src: '{{ local_temp_dir }}/linked_dir'
state: link
- name: Create a circular subdir
file:
path: '{{ local_temp_dir }}/top_dir/subdir'
state: directory
### FIXME: Also add a test for a relative symlink
- name: Create a circular symlink
file:
path: '{{ local_temp_dir }}/top_dir/subdir/circle'
src: '{{ local_temp_dir }}/top_dir/'
state: link
- name: Copy the directory's link
copy:
src: '{{ local_temp_dir }}/top_dir'
dest: '{{ remote_dir }}/new_dir'
local_follow: True
- name: Stat the copied path
stat:
path: '{{ remote_dir }}/new_dir/top_dir/follow_link_dir'
register: stat_dir_result
- name: Stat the copied file
stat:
path: '{{ remote_dir }}/new_dir/top_dir/follow_link_dir/file1'
register: stat_file_in_dir_result
- name: Stat the circular symlink
stat:
path: '{{ remote_dir }}/new_dir/top_dir/subdir/circle'
register: stat_circular_symlink_result
- name: Assert that the directory exists
assert:
that:
- stat_dir_result.stat.exists
- stat_dir_result.stat.isdir
- stat_file_in_dir_result.stat.exists
- stat_file_in_dir_result.stat.isreg
- stat_circular_symlink_result.stat.exists
- stat_circular_symlink_result.stat.islnk
# Relative paths in dest:
- name: Smoketest that copying content to an implicit relative path works
copy:
content: 'testing'
dest: 'ansible-testing.txt'
register: relative_results
- name: Assert that copying to an implicit relative path reported changed
assert:
that:
- 'relative_results["changed"]'
- 'relative_results["checksum"] == "dc724af18fbdd4e59189f5fe768a5f8311527050"'
- name: Test that copying the same content with an implicit relative path reports no change
copy:
content: 'testing'
dest: 'ansible-testing.txt'
register: relative_results
- name: Assert that copying the same content with an implicit relative path reports no change
assert:
that:
- 'not relative_results["changed"]'
- 'relative_results["checksum"] == "dc724af18fbdd4e59189f5fe768a5f8311527050"'
- name: Test that copying different content with an implicit relative path reports change
copy:
content: 'testing2'
dest: 'ansible-testing.txt'
register: relative_results
- name: Assert that copying different content with an implicit relative path reports changed
assert:
that:
- 'relative_results["changed"]'
- 'relative_results["checksum"] == "596b29ec9afea9e461a20610d150939b9c399d93"'
- name: Smoketest that explicit relative path works
copy:
content: 'testing'
dest: './ansible-testing.txt'
register: relative_results
- name: Assert that explicit relative paths reports change
assert:
that:
- 'relative_results["changed"]'
- 'relative_results["checksum"] == "dc724af18fbdd4e59189f5fe768a5f8311527050"'
- name: Cleanup relative path tests
file:
path: 'ansible-testing.txt'
state: absent
# src is a file, dest is a non-existent directory (2 levels of directories):
# using remote_src
# checks that dest is created
- include_tasks: file=dest_in_non_existent_directories_remote_src.yml
with_items:
- { src: 'foo.txt', dest: 'new_sub_dir1/sub_dir2/', check: 'new_sub_dir1/sub_dir2/foo.txt' }
# src is a file, dest is file in a non-existent directory: checks that a failure occurs
# using remote_src
- include_tasks: file=src_file_dest_file_in_non_existent_dir_remote_src.yml
with_items:
- 'new_sub_dir1/sub_dir2/foo.txt'
- 'new_sub_dir1/foo.txt'
loop_control:
loop_var: 'dest'
# src is a file, dest is a non-existent directory (2 levels of directories):
# checks that dest is created
- include_tasks: file=dest_in_non_existent_directories.yml
with_items:
- { src: 'foo.txt', dest: 'new_sub_dir1/sub_dir2/', check: 'new_sub_dir1/sub_dir2/foo.txt' }
- { src: 'subdir', dest: 'new_sub_dir1/sub_dir2/', check: 'new_sub_dir1/sub_dir2/subdir/bar.txt' }
- { src: 'subdir/', dest: 'new_sub_dir1/sub_dir2/', check: 'new_sub_dir1/sub_dir2/bar.txt' }
- { src: 'subdir', dest: 'new_sub_dir1/sub_dir2', check: 'new_sub_dir1/sub_dir2/subdir/bar.txt' }
- { src: 'subdir/', dest: 'new_sub_dir1/sub_dir2', check: 'new_sub_dir1/sub_dir2/bar.txt' }
# src is a file, dest is file in a non-existent directory: checks that a failure occurs
- include_tasks: file=src_file_dest_file_in_non_existent_dir.yml
with_items:
- 'new_sub_dir1/sub_dir2/foo.txt'
- 'new_sub_dir1/foo.txt'
loop_control:
loop_var: 'dest'
#
# Recursive copying on remote host
#
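## these remote_src cases exercise the copy_diff_files, copy_left_only and
## copy_common_dirs helpers from the module source shown earlier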
## prepare for test
- block:
- name: execute - Create a test src dir
file:
path: '{{ remote_dir }}/remote_dir_src'
state: directory
- name: gather - Stat the remote_dir_src
stat:
path: '{{ remote_dir }}/remote_dir_src'
register: stat_remote_dir_src_before
- name: execute - Create a subdir
file:
path: '{{ remote_dir }}/remote_dir_src/subdir'
state: directory
- name: gather - Stat the remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/remote_dir_src/subdir'
register: stat_remote_dir_src_subdir_before
- name: execute - Create a file in the top of src
copy:
dest: '{{ remote_dir }}/remote_dir_src/file1'
content: 'hello world 1'
- name: gather - Stat the remote_dir_src/file1
stat:
path: '{{ remote_dir }}/remote_dir_src/file1'
register: stat_remote_dir_src_file1_before
- name: execute - Create a file in the subdir
copy:
dest: '{{ remote_dir }}/remote_dir_src/subdir/file12'
content: 'hello world 12'
- name: gather - Stat the remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/remote_dir_src/subdir/file12'
register: stat_remote_dir_src_subdir_file12_before
- name: execute - Create a link to the file12
file:
path: '{{ remote_dir }}/remote_dir_src/link_file12'
src: '{{ remote_dir }}/remote_dir_src/subdir/file12'
state: link
- name: gather - Stat the remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/remote_dir_src/link_file12'
register: stat_remote_dir_src_link_file12_before
### test when src endswith os.sep and dest isdir
- block:
### local_follow: True
- name: execute - Create a test dest dir
file:
path: '{{ remote_dir }}/testcase1_local_follow_true'
state: directory
- name: execute - Copy the directory on remote with local_follow True
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase1_local_follow_true'
local_follow: True
register: testcase1
- name: gather - Stat the testcase1_local_follow_true
stat:
path: '{{ remote_dir }}/testcase1_local_follow_true'
register: stat_testcase1_local_follow_true
- name: gather - Stat the testcase1_local_follow_true/subdir
stat:
path: '{{ remote_dir }}/testcase1_local_follow_true/subdir'
register: stat_testcase1_local_follow_true_subdir
- name: gather - Stat the testcase1_local_follow_true/file1
stat:
path: '{{ remote_dir }}/testcase1_local_follow_true/file1'
register: stat_testcase1_local_follow_true_file1
- name: gather - Stat the testcase1_local_follow_true/subdir/file12
stat:
path: '{{ remote_dir }}/testcase1_local_follow_true/subdir/file12'
register: stat_testcase1_local_follow_true_subdir_file12
- name: gather - Stat the testcase1_local_follow_true/link_file12
stat:
path: '{{ remote_dir }}/testcase1_local_follow_true/link_file12'
register: stat_testcase1_local_follow_true_link_file12
    - name: assert - remote_dir_src was copied with local_follow True
assert:
that:
- testcase1 is changed
- "stat_testcase1_local_follow_true.stat.isdir"
- "stat_testcase1_local_follow_true_subdir.stat.isdir"
- "stat_testcase1_local_follow_true_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase1_local_follow_true_file1.stat.checksum"
- "stat_testcase1_local_follow_true_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase1_local_follow_true_subdir_file12.stat.checksum"
- "stat_testcase1_local_follow_true_link_file12.stat.exists"
- "not stat_testcase1_local_follow_true_link_file12.stat.islnk"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase1_local_follow_true_link_file12.stat.checksum"
### local_follow: False
- name: execute - Create a test dest dir
file:
path: '{{ remote_dir }}/testcase1_local_follow_false'
state: directory
- name: execute - Copy the directory on remote with local_follow False
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase1_local_follow_false'
local_follow: False
register: testcase1
- name: gather - Stat the testcase1_local_follow_false
stat:
path: '{{ remote_dir }}/testcase1_local_follow_false'
register: stat_testcase1_local_follow_false
- name: gather - Stat the testcase1_local_follow_false/subdir
stat:
path: '{{ remote_dir }}/testcase1_local_follow_false/subdir'
register: stat_testcase1_local_follow_false_subdir
- name: gather - Stat the testcase1_local_follow_false/file1
stat:
path: '{{ remote_dir }}/testcase1_local_follow_false/file1'
register: stat_testcase1_local_follow_false_file1
- name: gather - Stat the testcase1_local_follow_false/subdir/file12
stat:
path: '{{ remote_dir }}/testcase1_local_follow_false/subdir/file12'
register: stat_testcase1_local_follow_false_subdir_file12
- name: gather - Stat the testcase1_local_follow_false/link_file12
stat:
path: '{{ remote_dir }}/testcase1_local_follow_false/link_file12'
register: stat_testcase1_local_follow_false_link_file12
- name: assert - remote_dir_src has been copied with local_follow False.
assert:
that:
- testcase1 is changed
- "stat_testcase1_local_follow_false.stat.isdir"
- "stat_testcase1_local_follow_false_subdir.stat.isdir"
- "stat_testcase1_local_follow_false_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase1_local_follow_false_file1.stat.checksum"
- "stat_testcase1_local_follow_false_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase1_local_follow_false_subdir_file12.stat.checksum"
- "stat_testcase1_local_follow_false_link_file12.stat.exists"
- "stat_testcase1_local_follow_false_link_file12.stat.islnk"
## test when src endswith os.sep and dest not exists
- block:
- name: execute - Copy the directory on remote with local_follow True
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase2_local_follow_true'
local_follow: True
register: testcase2
- name: gather - Stat the testcase2_local_follow_true
stat:
path: '{{ remote_dir }}/testcase2_local_follow_true'
register: stat_testcase2_local_follow_true
- name: gather - Stat the testcase2_local_follow_true/subdir
stat:
path: '{{ remote_dir }}/testcase2_local_follow_true/subdir'
register: stat_testcase2_local_follow_true_subdir
- name: gather - Stat the testcase2_local_follow_true/file1
stat:
path: '{{ remote_dir }}/testcase2_local_follow_true/file1'
register: stat_testcase2_local_follow_true_file1
- name: gather - Stat the testcase2_local_follow_true/subdir/file12
stat:
path: '{{ remote_dir }}/testcase2_local_follow_true/subdir/file12'
register: stat_testcase2_local_follow_true_subdir_file12
- name: gather - Stat the testcase2_local_follow_true/link_file12
stat:
path: '{{ remote_dir }}/testcase2_local_follow_true/link_file12'
register: stat_testcase2_local_follow_true_link_file12
- name: assert - remote_dir_src has been copied with local_follow True.
assert:
that:
- testcase2 is changed
- "stat_testcase2_local_follow_true.stat.isdir"
- "stat_testcase2_local_follow_true_subdir.stat.isdir"
- "stat_testcase2_local_follow_true_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase2_local_follow_true_file1.stat.checksum"
- "stat_testcase2_local_follow_true_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase2_local_follow_true_subdir_file12.stat.checksum"
- "stat_testcase2_local_follow_true_link_file12.stat.exists"
- "not stat_testcase2_local_follow_true_link_file12.stat.islnk"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase2_local_follow_true_link_file12.stat.checksum"
### local_follow: False
- name: execute - Copy the directory on remote with local_follow False
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase2_local_follow_false'
local_follow: False
register: testcase2
- name: execute - Copy the directory on remote with local_follow False
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase2_local_follow_false'
local_follow: False
register: testcase1
- name: gather - Stat the testcase2_local_follow_false
stat:
path: '{{ remote_dir }}/testcase2_local_follow_false'
register: stat_testcase2_local_follow_false
- name: gather - Stat the testcase2_local_follow_false/subdir
stat:
path: '{{ remote_dir }}/testcase2_local_follow_false/subdir'
register: stat_testcase2_local_follow_false_subdir
- name: gather - Stat the testcase2_local_follow_false/file1
stat:
path: '{{ remote_dir }}/testcase2_local_follow_false/file1'
register: stat_testcase2_local_follow_false_file1
- name: gather - Stat the testcase2_local_follow_false/subdir/file12
stat:
path: '{{ remote_dir }}/testcase2_local_follow_false/subdir/file12'
register: stat_testcase2_local_follow_false_subdir_file12
- name: gather - Stat the testcase2_local_follow_false/link_file12
stat:
path: '{{ remote_dir }}/testcase2_local_follow_false/link_file12'
register: stat_testcase2_local_follow_false_link_file12
- name: assert - remote_dir_src has been copied with local_follow False.
assert:
that:
- testcase2 is changed
- "stat_testcase2_local_follow_false.stat.isdir"
- "stat_testcase2_local_follow_false_subdir.stat.isdir"
- "stat_testcase2_local_follow_false_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase2_local_follow_false_file1.stat.checksum"
- "stat_testcase2_local_follow_false_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase2_local_follow_false_subdir_file12.stat.checksum"
- "stat_testcase2_local_follow_false_link_file12.stat.exists"
- "stat_testcase2_local_follow_false_link_file12.stat.islnk"
## test when src not endswith os.sep and dest isdir
- block:
### local_follow: True
- name: execute - Create a test dest dir
file:
path: '{{ remote_dir }}/testcase3_local_follow_true'
state: directory
- name: execute - Copy the directory on remote with local_follow True
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src'
dest: '{{ remote_dir }}/testcase3_local_follow_true'
local_follow: True
register: testcase3
- name: gather - Stat the testcase3_local_follow_true
stat:
path: '{{ remote_dir }}/testcase3_local_follow_true/remote_dir_src'
register: stat_testcase3_local_follow_true_remote_dir_src
- name: gather - Stat the testcase3_local_follow_true/remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/testcase3_local_follow_true/remote_dir_src/subdir'
register: stat_testcase3_local_follow_true_remote_dir_src_subdir
- name: gather - Stat the testcase3_local_follow_true/remote_dir_src/file1
stat:
path: '{{ remote_dir }}/testcase3_local_follow_true/remote_dir_src/file1'
register: stat_testcase3_local_follow_true_remote_dir_src_file1
- name: gather - Stat the testcase3_local_follow_true/remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/testcase3_local_follow_true/remote_dir_src/subdir/file12'
register: stat_testcase3_local_follow_true_remote_dir_src_subdir_file12
- name: gather - Stat the testcase3_local_follow_true/remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/testcase3_local_follow_true/remote_dir_src/link_file12'
register: stat_testcase3_local_follow_true_remote_dir_src_link_file12
- name: assert - remote_dir_src has been copied with local_follow True.
assert:
that:
- testcase3 is changed
- "stat_testcase3_local_follow_true_remote_dir_src.stat.isdir"
- "stat_testcase3_local_follow_true_remote_dir_src_subdir.stat.isdir"
- "stat_testcase3_local_follow_true_remote_dir_src_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase3_local_follow_true_remote_dir_src_file1.stat.checksum"
- "stat_testcase3_local_follow_true_remote_dir_src_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase3_local_follow_true_remote_dir_src_subdir_file12.stat.checksum"
- "stat_testcase3_local_follow_true_remote_dir_src_link_file12.stat.exists"
- "not stat_testcase3_local_follow_true_remote_dir_src_link_file12.stat.islnk"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase3_local_follow_true_remote_dir_src_link_file12.stat.checksum"
### local_follow: False
- name: execute - Create a test dest dir
file:
path: '{{ remote_dir }}/testcase3_local_follow_false'
state: directory
- name: execute - Copy the directory on remote with local_follow False
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src'
dest: '{{ remote_dir }}/testcase3_local_follow_false'
local_follow: False
register: testcase3
- name: gather - Stat the testcase3_local_follow_false
stat:
path: '{{ remote_dir }}/testcase3_local_follow_false/remote_dir_src'
register: stat_testcase3_local_follow_false_remote_dir_src
- name: gather - Stat the testcase3_local_follow_false/remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/testcase3_local_follow_false/remote_dir_src/subdir'
register: stat_testcase3_local_follow_false_remote_dir_src_subdir
- name: gather - Stat the testcase3_local_follow_false/remote_dir_src/file1
stat:
path: '{{ remote_dir }}/testcase3_local_follow_false/remote_dir_src/file1'
register: stat_testcase3_local_follow_false_remote_dir_src_file1
- name: gather - Stat the testcase3_local_follow_false/remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/testcase3_local_follow_false/remote_dir_src/subdir/file12'
register: stat_testcase3_local_follow_false_remote_dir_src_subdir_file12
- name: gather - Stat the testcase3_local_follow_false/remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/testcase3_local_follow_false/remote_dir_src/link_file12'
register: stat_testcase3_local_follow_false_remote_dir_src_link_file12
- name: assert - remote_dir_src has been copied with local_follow False.
assert:
that:
- testcase3 is changed
- "stat_testcase3_local_follow_false_remote_dir_src.stat.isdir"
- "stat_testcase3_local_follow_false_remote_dir_src_subdir.stat.isdir"
- "stat_testcase3_local_follow_false_remote_dir_src_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase3_local_follow_false_remote_dir_src_file1.stat.checksum"
- "stat_testcase3_local_follow_false_remote_dir_src_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase3_local_follow_false_remote_dir_src_subdir_file12.stat.checksum"
- "stat_testcase3_local_follow_false_remote_dir_src_link_file12.stat.exists"
- "stat_testcase3_local_follow_false_remote_dir_src_link_file12.stat.islnk"
## test when src not endswith os.sep and dest not exists
- block:
### local_follow: True
- name: execute - Copy the directory on remote with local_follow True
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src'
dest: '{{ remote_dir }}/testcase4_local_follow_true'
local_follow: True
register: testcase4
- name: gather - Stat the testcase4_local_follow_true
stat:
path: '{{ remote_dir }}/testcase4_local_follow_true/remote_dir_src'
register: stat_testcase4_local_follow_true_remote_dir_src
- name: gather - Stat the testcase4_local_follow_true/remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/testcase4_local_follow_true/remote_dir_src/subdir'
register: stat_testcase4_local_follow_true_remote_dir_src_subdir
- name: gather - Stat the testcase4_local_follow_true/remote_dir_src/file1
stat:
path: '{{ remote_dir }}/testcase4_local_follow_true/remote_dir_src/file1'
register: stat_testcase4_local_follow_true_remote_dir_src_file1
- name: gather - Stat the testcase4_local_follow_true/remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/testcase4_local_follow_true/remote_dir_src/subdir/file12'
register: stat_testcase4_local_follow_true_remote_dir_src_subdir_file12
- name: gather - Stat the testcase4_local_follow_true/remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/testcase4_local_follow_true/remote_dir_src/link_file12'
register: stat_testcase4_local_follow_true_remote_dir_src_link_file12
- name: assert - remote_dir_src has been copied with local_follow True.
assert:
that:
- testcase4 is changed
- "stat_testcase4_local_follow_true_remote_dir_src.stat.isdir"
- "stat_testcase4_local_follow_true_remote_dir_src_subdir.stat.isdir"
- "stat_testcase4_local_follow_true_remote_dir_src_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase4_local_follow_true_remote_dir_src_file1.stat.checksum"
- "stat_testcase4_local_follow_true_remote_dir_src_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase4_local_follow_true_remote_dir_src_subdir_file12.stat.checksum"
- "stat_testcase4_local_follow_true_remote_dir_src_link_file12.stat.exists"
- "not stat_testcase4_local_follow_true_remote_dir_src_link_file12.stat.islnk"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase4_local_follow_true_remote_dir_src_link_file12.stat.checksum"
### local_follow: False
- name: execute - Copy the directory on remote with local_follow False
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src'
dest: '{{ remote_dir }}/testcase4_local_follow_false'
local_follow: False
register: testcase4
- name: gather - Stat the testcase4_local_follow_false
stat:
path: '{{ remote_dir }}/testcase4_local_follow_false/remote_dir_src'
register: stat_testcase4_local_follow_false_remote_dir_src
- name: gather - Stat the testcase4_local_follow_false/remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/testcase4_local_follow_false/remote_dir_src/subdir'
register: stat_testcase4_local_follow_false_remote_dir_src_subdir
- name: gather - Stat the testcase4_local_follow_false/remote_dir_src/file1
stat:
path: '{{ remote_dir }}/testcase4_local_follow_false/remote_dir_src/file1'
register: stat_testcase4_local_follow_false_remote_dir_src_file1
- name: gather - Stat the testcase4_local_follow_false/remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/testcase4_local_follow_false/remote_dir_src/subdir/file12'
register: stat_testcase4_local_follow_false_remote_dir_src_subdir_file12
- name: gather - Stat the testcase4_local_follow_false/remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/testcase4_local_follow_false/remote_dir_src/link_file12'
register: stat_testcase4_local_follow_false_remote_dir_src_link_file12
- name: assert - remote_dir_src has been copied with local_follow False.
assert:
that:
- testcase4 is changed
- "stat_testcase4_local_follow_false_remote_dir_src.stat.isdir"
- "stat_testcase4_local_follow_false_remote_dir_src_subdir.stat.isdir"
- "stat_testcase4_local_follow_false_remote_dir_src_file1.stat.exists"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_testcase4_local_follow_false_remote_dir_src_file1.stat.checksum"
- "stat_testcase4_local_follow_false_remote_dir_src_subdir_file12.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_testcase4_local_follow_false_remote_dir_src_subdir_file12.stat.checksum"
- "stat_testcase4_local_follow_false_remote_dir_src_link_file12.stat.exists"
- "stat_testcase4_local_follow_false_remote_dir_src_link_file12.stat.islnk"
- block:
- name: execute - Clone the source directory on remote
copy:
remote_src: True
src: '{{ remote_dir }}/remote_dir_src/'
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_src'
- name: Create a 2nd level subdirectory
file:
path: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/subdir/subdir2/'
state: directory
- name: execute - Copy the directory on remote
copy:
remote_src: True
src: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/'
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_dest'
local_follow: True
- name: execute - Create a new file in the subdir
copy:
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/subdir/subdir2/file13'
content: 'very new file'
- name: gather - Stat the testcase5_remote_src_subdirs_src/subdir/subdir2/file13
stat:
path: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/subdir/subdir2/file13'
- name: execute - Copy the directory on remote
copy:
remote_src: True
src: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/'
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_dest/'
register: testcase5_new
- name: execute - Edit a file in the subdir
copy:
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/subdir/subdir2/file13'
content: 'NOT hello world 12'
- name: gather - Stat the testcase5_remote_src_subdirs_src/subdir/subdir2/file13
stat:
path: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/subdir/subdir2/file13'
register: stat_testcase5_remote_src_subdirs_file13_before
- name: execute - Copy the directory on remote
copy:
remote_src: True
src: '{{ remote_dir }}/testcase5_remote_src_subdirs_src/'
dest: '{{ remote_dir }}/testcase5_remote_src_subdirs_dest/'
register: testcase5_edited
- name: gather - Stat the testcase5_remote_src_subdirs_dest/subdir/subdir2/file13
stat:
path: '{{ remote_dir }}/testcase5_remote_src_subdirs_dest/subdir/subdir2/file13'
register: stat_testcase5_remote_src_subdirs_file13
- name: assert - new and edited files in subdirectories are copied on re-copy.
assert:
that:
- testcase5_new is changed
- testcase5_edited is changed
- "stat_testcase5_remote_src_subdirs_file13.stat.exists"
- "stat_testcase5_remote_src_subdirs_file13_before.stat.checksum == stat_testcase5_remote_src_subdirs_file13.stat.checksum"
## test copying the directory on remote with chown
- name: setting 'ansible_copy_test_user_name' outside block since 'always' section depends on this also
set_fact:
ansible_copy_test_user_name: 'ansible_copy_test_{{ 100000 | random }}'
- block:
- name: execute - create a user for test
user:
name: '{{ ansible_copy_test_user_name }}'
state: present
become: true
register: ansible_copy_test_user
- name: execute - create a group for test
group:
name: '{{ ansible_copy_test_user_name }}'
state: present
become: true
register: ansible_copy_test_group
- name: execute - Copy the directory on remote with chown
copy:
remote_src: True
src: '{{ remote_dir_expanded }}/remote_dir_src/'
dest: '{{ remote_dir_expanded }}/new_dir_with_chown'
owner: '{{ ansible_copy_test_user_name }}'
group: '{{ ansible_copy_test_user_name }}'
follow: true
register: testcase5
become: true
- name: gather - Stat the new_dir_with_chown
stat:
path: '{{ remote_dir }}/new_dir_with_chown'
register: stat_new_dir_with_chown
- name: gather - Stat the new_dir_with_chown/file1
stat:
path: '{{ remote_dir }}/new_dir_with_chown/file1'
register: stat_new_dir_with_chown_file1
- name: gather - Stat the new_dir_with_chown/subdir
stat:
path: '{{ remote_dir }}/new_dir_with_chown/subdir'
register: stat_new_dir_with_chown_subdir
- name: gather - Stat the new_dir_with_chown/subdir/file12
stat:
path: '{{ remote_dir }}/new_dir_with_chown/subdir/file12'
register: stat_new_dir_with_chown_subdir_file12
- name: gather - Stat the new_dir_with_chown/link_file12
stat:
path: '{{ remote_dir }}/new_dir_with_chown/link_file12'
register: stat_new_dir_with_chown_link_file12
- name: assert - owner and group have changed
assert:
that:
- testcase5 is changed
- "stat_new_dir_with_chown.stat.uid == {{ ansible_copy_test_user.uid }}"
- "stat_new_dir_with_chown.stat.gid == {{ ansible_copy_test_group.gid }}"
- "stat_new_dir_with_chown.stat.pw_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown.stat.gr_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_file1.stat.uid == {{ ansible_copy_test_user.uid }}"
- "stat_new_dir_with_chown_file1.stat.gid == {{ ansible_copy_test_group.gid }}"
- "stat_new_dir_with_chown_file1.stat.pw_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_file1.stat.gr_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_subdir.stat.uid == {{ ansible_copy_test_user.uid }}"
- "stat_new_dir_with_chown_subdir.stat.gid == {{ ansible_copy_test_group.gid }}"
- "stat_new_dir_with_chown_subdir.stat.pw_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_subdir.stat.gr_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_subdir_file12.stat.uid == {{ ansible_copy_test_user.uid }}"
- "stat_new_dir_with_chown_subdir_file12.stat.gid == {{ ansible_copy_test_group.gid }}"
- "stat_new_dir_with_chown_subdir_file12.stat.pw_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_subdir_file12.stat.gr_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_link_file12.stat.uid == {{ ansible_copy_test_user.uid }}"
- "stat_new_dir_with_chown_link_file12.stat.gid == {{ ansible_copy_test_group.gid }}"
- "stat_new_dir_with_chown_link_file12.stat.pw_name == '{{ ansible_copy_test_user_name }}'"
- "stat_new_dir_with_chown_link_file12.stat.gr_name == '{{ ansible_copy_test_user_name }}'"
always:
- name: execute - remove the user for test
user:
name: '{{ ansible_copy_test_user_name }}'
state: absent
remove: yes
become: true
- name: execute - remove the group for test
group:
name: '{{ ansible_copy_test_user_name }}'
state: absent
become: true
## testcase last - make sure remote_dir_src did not change
- block:
- name: Stat the remote_dir_src
stat:
path: '{{ remote_dir }}/remote_dir_src'
register: stat_remote_dir_src_after
- name: Stat the remote_dir_src/subdir
stat:
path: '{{ remote_dir }}/remote_dir_src/subdir'
register: stat_remote_dir_src_subdir_after
- name: Stat the remote_dir_src/file1
stat:
path: '{{ remote_dir }}/remote_dir_src/file1'
register: stat_remote_dir_src_file1_after
- name: Stat the remote_dir_src/subdir/file12
stat:
path: '{{ remote_dir }}/remote_dir_src/subdir/file12'
register: stat_remote_dir_src_subdir_file12_after
- name: Stat the remote_dir_src/link_file12
stat:
path: '{{ remote_dir }}/remote_dir_src/link_file12'
register: stat_remote_dir_src_link_file12_after
- name: Assert that remote_dir_src did not change.
assert:
that:
- "stat_remote_dir_src_after.stat.exists"
- "stat_remote_dir_src_after.stat.isdir"
- "stat_remote_dir_src_before.stat.uid == stat_remote_dir_src_after.stat.uid"
- "stat_remote_dir_src_before.stat.gid == stat_remote_dir_src_after.stat.gid"
- "stat_remote_dir_src_before.stat.pw_name == stat_remote_dir_src_after.stat.pw_name"
- "stat_remote_dir_src_before.stat.gr_name == stat_remote_dir_src_after.stat.gr_name"
- "stat_remote_dir_src_before.stat.path == stat_remote_dir_src_after.stat.path"
- "stat_remote_dir_src_before.stat.mode == stat_remote_dir_src_after.stat.mode"
- "stat_remote_dir_src_subdir_after.stat.exists"
- "stat_remote_dir_src_subdir_after.stat.isdir"
- "stat_remote_dir_src_subdir_before.stat.uid == stat_remote_dir_src_subdir_after.stat.uid"
- "stat_remote_dir_src_subdir_before.stat.gid == stat_remote_dir_src_subdir_after.stat.gid"
- "stat_remote_dir_src_subdir_before.stat.pw_name == stat_remote_dir_src_subdir_after.stat.pw_name"
- "stat_remote_dir_src_subdir_before.stat.gr_name == stat_remote_dir_src_subdir_after.stat.gr_name"
- "stat_remote_dir_src_subdir_before.stat.path == stat_remote_dir_src_subdir_after.stat.path"
- "stat_remote_dir_src_subdir_before.stat.mode == stat_remote_dir_src_subdir_after.stat.mode"
- "stat_remote_dir_src_file1_after.stat.exists"
- "stat_remote_dir_src_file1_before.stat.uid == stat_remote_dir_src_file1_after.stat.uid"
- "stat_remote_dir_src_file1_before.stat.gid == stat_remote_dir_src_file1_after.stat.gid"
- "stat_remote_dir_src_file1_before.stat.pw_name == stat_remote_dir_src_file1_after.stat.pw_name"
- "stat_remote_dir_src_file1_before.stat.gr_name == stat_remote_dir_src_file1_after.stat.gr_name"
- "stat_remote_dir_src_file1_before.stat.path == stat_remote_dir_src_file1_after.stat.path"
- "stat_remote_dir_src_file1_before.stat.mode == stat_remote_dir_src_file1_after.stat.mode"
- "stat_remote_dir_src_file1_before.stat.checksum == stat_remote_dir_src_file1_after.stat.checksum"
- "stat_remote_dir_src_subdir_file12_after.stat.exists"
- "stat_remote_dir_src_subdir_file12_before.stat.uid == stat_remote_dir_src_subdir_file12_after.stat.uid"
- "stat_remote_dir_src_subdir_file12_before.stat.gid == stat_remote_dir_src_subdir_file12_after.stat.gid"
- "stat_remote_dir_src_subdir_file12_before.stat.pw_name == stat_remote_dir_src_subdir_file12_after.stat.pw_name"
- "stat_remote_dir_src_subdir_file12_before.stat.gr_name == stat_remote_dir_src_subdir_file12_after.stat.gr_name"
- "stat_remote_dir_src_subdir_file12_before.stat.path == stat_remote_dir_src_subdir_file12_after.stat.path"
- "stat_remote_dir_src_subdir_file12_before.stat.mode == stat_remote_dir_src_subdir_file12_after.stat.mode"
- "stat_remote_dir_src_subdir_file12_before.stat.checksum == stat_remote_dir_src_subdir_file12_after.stat.checksum"
- "stat_remote_dir_src_link_file12_after.stat.exists"
- "stat_remote_dir_src_link_file12_after.stat.islnk"
- "stat_remote_dir_src_link_file12_before.stat.uid == stat_remote_dir_src_link_file12_after.stat.uid"
- "stat_remote_dir_src_link_file12_before.stat.gid == stat_remote_dir_src_link_file12_after.stat.gid"
- "stat_remote_dir_src_link_file12_before.stat.pw_name == stat_remote_dir_src_link_file12_after.stat.pw_name"
- "stat_remote_dir_src_link_file12_before.stat.gr_name == stat_remote_dir_src_link_file12_after.stat.gr_name"
- "stat_remote_dir_src_link_file12_before.stat.path == stat_remote_dir_src_link_file12_after.stat.path"
- "stat_remote_dir_src_link_file12_before.stat.mode == stat_remote_dir_src_link_file12_after.stat.mode"
# Test for issue 69783: copy with remote_src=yes and src='dir/' preserves all permissions
- block:
- name: Create directory structure
file:
path: "{{ local_temp_dir }}/test69783/{{ item }}"
state: directory
loop:
- "src/dir"
- "dest"
- name: Create source file structure
file:
path: "{{ local_temp_dir }}/test69783/src/{{ item.name }}"
state: touch
mode: "{{ item.mode }}"
loop:
- { name: 'readwrite', mode: '0644' }
- { name: 'executable', mode: '0755' }
- { name: 'readonly', mode: '0444' }
- { name: 'dir/readwrite', mode: '0644' }
- { name: 'dir/executable', mode: '0755' }
- { name: 'dir/readonly', mode: '0444' }
- name: Recursive remote copy with preserve
copy:
src: "{{ local_temp_dir }}/test69783/src/"
dest: "{{ local_temp_dir }}/test69783/dest/"
remote_src: yes
mode: preserve
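# 'mode: preserve' gives each copied file the same permission bits as its
# source file; the stat tasks and asserts below verify exactly that.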
- name: Stat dest 'readwrite' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/readwrite"
register: dest_readwrite_stat
- name: Stat dest 'executable' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/executable"
register: dest_executable_stat
- name: Stat dest 'readonly' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/readonly"
register: dest_readonly_stat
- name: Stat dest 'dir/readwrite' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/dir/readwrite"
register: dest_dir_readwrite_stat
- name: Stat dest 'dir/executable' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/dir/executable"
register: dest_dir_executable_stat
- name: Stat dest 'dir/readonly' file
stat:
path: "{{ local_temp_dir}}/test69783/dest/dir/readonly"
register: dest_dir_readonly_stat
- name: Assert modes are preserved
assert:
that:
- "dest_readwrite_stat.stat.mode == '0644'"
- "dest_executable_stat.stat.mode == '0755'"
- "dest_readonly_stat.stat.mode == '0444'"
- "dest_dir_readwrite_stat.stat.mode == '0644'"
- "dest_dir_executable_stat.stat.mode == '0755'"
- "dest_dir_readonly_stat.stat.mode == '0444'"
- name: fail to copy an encrypted file without the password set
copy:
src: '{{role_path}}/files-different/vault/vault-file'
dest: '{{remote_tmp_dir}}/copy/file'
register: fail_copy_encrypted_file
ignore_errors: yes # weird failed_when doesn't work in this case
- name: assert failure message when copying an encrypted file without the password set
assert:
that:
- fail_copy_encrypted_file is failed
- fail_copy_encrypted_file.msg == 'A vault password or secret must be specified to decrypt {{role_path}}/files-different/vault/vault-file'
- name: fail to copy a directory with an encrypted file without the password
copy:
src: '{{role_path}}/files-different/vault'
dest: '{{remote_tmp_dir}}/copy'
register: fail_copy_directory_with_enc_file
ignore_errors: yes
- name: assert failure message when copying a directory that contains an encrypted file without the password set
assert:
that:
- fail_copy_directory_with_enc_file is failed
- fail_copy_directory_with_enc_file.msg == 'A vault password or secret must be specified to decrypt {{role_path}}/files-different/vault/vault-file'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,580 |
Support empty string defaults for configuration settings
|
### Summary
As of https://github.com/ansible/ansible/pull/74523, using empty string defaults for config options in `lib/ansible/config/base.yml` causes the warning: `WARNING: Inline literal start-string without end-string.`
The two configuration settings defaulting to empty strings were replaced with ~ (YAML equivalent to Python's None) for the time being and in this case it made no functional difference. Since the configuration settings are used both for documentation and in the code (and None != ''), empty string defaults should still be valid.
On a semi-related note, the file contains `default: null`, `default: ~`, ~`default:`~ (removed these in #74607), and settings that omit the default field, all of which are functionally equivalent but not displayed the same way.
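For illustration, a hypothetical `base.yml` entry (the setting name and keys below are made up) shows how an empty-string default breaks the rendered RST:

```yaml
EXAMPLE_SETTING:                # hypothetical entry, not a real setting
  name: Example setting
  default: ''                   # config.rst.j2 renders this as ":Default: ````",
                                # an unterminated inline literal that docutils warns about
  env: [{name: ANSIBLE_EXAMPLE_SETTING}]
  ini:
  - {key: example_setting, section: defaults}
```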
### Issue Type
Documentation Report
### Component Name
docs/templates/config.rst.j2
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0]
```
### Configuration
```console
$ ansible-config dump --only-changed
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74580
|
https://github.com/ansible/ansible/pull/77733
|
e208fe59329a45966d23f28bd92c0ee5592ac71b
|
eecc4046e879ab4867a973c21b92de3f629eb49d
| 2021-05-05T17:24:41Z |
python
| 2022-10-07T19:27:17Z |
docs/templates/config.rst.j2
|
.. _ansible_configuration_settings:
{% set name = 'Ansible Configuration Settings' -%}
{% set name_slug = 'config' -%}
{% set name_len = name|length -%}
{{ '=' * name_len }}
{{name}}
{{ '=' * name_len }}
Ansible supports several sources for configuring its behavior, including an ini file named ``ansible.cfg``, environment variables, command-line options, playbook keywords, and variables. See :ref:`general_precedence_rules` for details on the relative precedence of each source.
The ``ansible-config`` utility allows users to see all the configuration settings available, their defaults, how to set them and
where their current value comes from. See :ref:`ansible-config` for more information.
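For example, to review only the settings that differ from their built-in defaults::

    $ ansible-config dump --only-changed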
.. _ansible_configuration_settings_locations:
The configuration file
======================
Changes can be made and used in a configuration file which will be searched for in the following order:
* ``ANSIBLE_CONFIG`` (environment variable if set)
* ``ansible.cfg`` (in the current directory)
* ``~/.ansible.cfg`` (in the home directory)
* ``/etc/ansible/ansible.cfg``
Ansible will process the above list and use the first file found; all others are ignored.
.. note::
The configuration file is one variant of an INI format.
Both the hash sign (``#``) and semicolon (``;``) are allowed as
comment markers when the comment starts the line.
However, if the comment is inline with regular values,
only the semicolon is allowed to introduce the comment.
For instance::
# some basic default values...
inventory = /etc/ansible/hosts ; This points to the file that lists your hosts
.. _generating_ansible.cfg:
Generating a sample ``ansible.cfg`` file
-----------------------------------------
You can generate a fully commented-out example ``ansible.cfg`` file, for example::
$ ansible-config init --disabled > ansible.cfg
You can also have a more complete file that includes existing plugins::
$ ansible-config init --disabled -t all > ansible.cfg
You can use these as starting points to create your own ``ansible.cfg`` file.
.. _cfg_in_world_writable_dir:
Avoiding security risks with ``ansible.cfg`` in the current directory
---------------------------------------------------------------------
If Ansible were to load ``ansible.cfg`` from a world-writable current working
directory, it would create a serious security risk. Another user could place
their own config file there, designed to make Ansible run malicious code both
locally and remotely, possibly with elevated privileges. For this reason,
Ansible will not automatically load a config file from the current working
directory if the directory is world-writable.
If you depend on using Ansible with a config file in the current working
directory, the best way to avoid this problem is to restrict access to your
Ansible directories to particular user(s) and/or group(s). If your Ansible
directories live on a filesystem which has to emulate Unix permissions, like
Vagrant or Windows Subsystem for Linux (WSL), you may, at first, not know how
you can fix this as ``chmod``, ``chown``, and ``chgrp`` might not work there.
In most of those cases, the correct fix is to modify the mount options of the
filesystem so the files and directories are readable and writable by the users
and groups running Ansible but closed to others. For more details on the
correct settings, see:
* for Vagrant, the `Vagrant documentation <https://www.vagrantup.com/docs/synced-folders/>`_ covers synced folder permissions.
* for WSL, the `WSL docs <https://docs.microsoft.com/en-us/windows/wsl/wsl-config#set-wsl-launch-settings>`_
and this `Microsoft blog post <https://blogs.msdn.microsoft.com/commandline/2018/01/12/chmod-chown-wsl-improvements/>`_ cover mount options.
If you absolutely depend on storing your Ansible config in a world-writable current
working directory, you can explicitly specify the config file via the
:envvar:`ANSIBLE_CONFIG` environment variable. Please take
appropriate steps to mitigate the security concerns above before doing so.
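For example, a run pinned to an explicit config file could look like the following (``site.yml`` is just a placeholder playbook name)::

    $ ANSIBLE_CONFIG=/path/to/ansible.cfg ansible-playbook site.yml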
Relative paths for configuration
--------------------------------
You can specify a relative path for many configuration options. In most of
those cases the path used will be relative to the ``ansible.cfg`` file used
for the current execution. If you need a path relative to your current working
directory (CWD) you can use the ``{%raw%}{{CWD}}{%endraw%}`` macro to specify
it. We do not recommend this approach, as using your CWD as the root of
relative paths can be a security risk. For example:
``cd /tmp; secureinfo=./newrootpassword ansible-playbook ~/safestuff/change_root_pwd.yml``.
Common Options
==============
This is a copy of the options available in our release; your local install might have extra options due to additional plugins.
You can use the command-line utility mentioned above (`ansible-config`) to browse through those.
{% if config_options %}
{% for config_option in config_options|sort %}
{% set config_len = config_option|length -%}
{% set config = config_options[config_option] %}
.. _{{config_option}}:
{{config_option}}
{{ '-' * config_len }}
{% if config['description'] and config['description'] != [''] %}
{% if config['description'] != ['TODO: write it'] %}
:Description: {{' '.join(config['description'])}}
{% endif %}
{% endif %}
{% if config['type'] %}
:Type: {{config['type']}}
{% endif %}
{% if 'default' in config %}
:Default: ``{{config['default']}}``
{% endif %}
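{# Caveat (ansible/ansible#74580): an empty-string default renders the line above as ":Default: ````", an unterminated inline literal that docutils flags with "Inline literal start-string without end-string." #}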
{% if config.get('choices', False) %}
:Choices:
{% if config['choices'] is mapping %}
{% for key in config['choices'].keys() %}
- :{{key}}: {{ config['choices'][key] }}
{% endfor %}
{% else %}
{% for key in config['choices'] %}
- {{key}}
{% endfor %}
{% endif %}
{% endif %}
{% if config['version_added'] %}
:Version Added: {{config['version_added']}}
{% endif %}
{% if config.get('ini', False) %}
:Ini:
{% for ini_map in config['ini']|sort(attribute='section') %}
{% if config['ini']|length > 1 %}- {% endif %}:Section: [{{ini_map['section']}}]
{% if config['ini']|length > 1 %} {% endif %}:Key: {{ini_map['key']}}
{% if ini_map['version_added'] %}
:Version Added: {{ini_map['version_added']}}
{% endif %}
{% if ini_map['deprecated'] %}
:Deprecated in: {{ini_map['deprecated']['version']}}
:Deprecated detail: {{ini_map['deprecated']['why']}}
{% if ini_map['deprecated']['alternatives'] %}
:Deprecated alternatives: {{ini_map['deprecated']['alternatives']}}
{% endif %}
{% endif %}
{% endfor %}
{% endif %}
{% if config.get('env', False) %}
:Environment:
{% for env_var_map in config['env']|sort(attribute='name') %}
{% if config['env']|length > 1 %}- {% endif %}:Variable: :envvar:`{{env_var_map['name']}}`
{% if env_var_map['version_added'] %}
:Version Added: {{env_var_map['version_added']}}
{% endif %}
{% if env_var_map['deprecated'] %}
:Deprecated in: {{env_var_map['deprecated']['version']}}
:Deprecated detail: {{env_var_map['deprecated']['why']}}
{% if env_var_map['deprecated']['alternatives'] %}
:Deprecated alternatives: {{env_var_map['deprecated']['alternatives']}}
{% endif %}
{% endif %}
{% endfor %}
{% endif %}
{% if config.get('vars', False) %}
:Variables:
{% for a_var in config['vars']|sort(attribute='name') %}
{% if config['vars']|length > 1 %}- {%endif%}:name: `{{a_var['name']}}`
{% if a_var['version_added'] %}
:Version Added: {{a_var['version_added']}}
{% endif %}
{% if a_var['deprecated'] %}
:Deprecated in: {{a_var['deprecated']['version']}}
:Deprecated detail: {{a_var['deprecated']['why']}}
{% if a_var['deprecated']['alternatives'] %}
:Deprecated alternatives: {{a_var['deprecated']['alternatives']}}
{% endif %}
{% endif %}
{% endfor %}
{% endif %}
{% if config['deprecated'] %}
:Deprecated in: {{config['deprecated']['version']}}
:Deprecated detail: {{config['deprecated']['why']}}
{% if config['deprecated']['alternatives'] %}
:Deprecated alternatives: {{config['deprecated']['alternatives']}}
{% endif %}
{% endif %}
{% endfor %}
Environment Variables
=====================
.. envvar:: ANSIBLE_CONFIG
Override the default ansible config file
{% for config_option in config_options %}
{% for env_var_map in config_options[config_option]['env'] %}
.. envvar:: {{env_var_map['name']}}
{% if config_options[config_option]['description'] and config_options[config_option]['description'] != [''] %}
{% if config_options[config_option]['description'] != ['TODO: write it'] %}
{{ ''.join(config_options[config_option]['description']) }}
{% endif %}
{% endif %}
See also :ref:`{{config_option}} <{{config_option}}>`
{% if env_var_map['version_added'] %}
:Version Added: {{env_var_map['version_added']}}
{% endif %}
{% if env_var_map['deprecated'] %}
:Deprecated in: {{env_var_map['deprecated']['version']}}
:Deprecated detail: {{env_var_map['deprecated']['why']}}
{% if env_var_map['deprecated']['alternatives'] %}
:Deprecated alternatives: {{env_var_map['deprecated']['alternatives']}}
{% endif %}
{% endif %}
{% endfor %}
{% endfor %}
{% endif %}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 74,580 |
Support empty string defaults for configuration settings
|
### Summary
As of https://github.com/ansible/ansible/pull/74523, using empty string defaults for config options in `lib/ansible/config/base.yml` causes the warning: `WARNING: Inline literal start-string without end-string.`
The two configuration settings defaulting to empty strings were replaced with ~ (YAML equivalent to Python's None) for the time being and in this case it made no functional difference. Since the configuration settings are used both for documentation and in the code (and None != ''), empty string defaults should still be valid.
On a semi-related note, the file contains `default: null`, `default: ~`, ~`default:`~ (removed these in #74607), and settings that omit the default field, all of which are functionally equivalent but not displayed the same way.
### Issue Type
Documentation Report
### Component Name
docs/templates/config.rst.j2
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0.dev0]
```
### Configuration
```console
$ ansible-config dump --only-changed
N/A
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/74580
|
https://github.com/ansible/ansible/pull/77733
|
e208fe59329a45966d23f28bd92c0ee5592ac71b
|
eecc4046e879ab4867a973c21b92de3f629eb49d
| 2021-05-05T17:24:41Z |
python
| 2022-10-07T19:27:17Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use; set to an empty list to enable all installed templates.
env:
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option; each connection plugin can override it by providing more specific options or by not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
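# Illustrative ansible.cfg snippet enabling pipelining (either of the two
# sections listed above works):
#   [connection]
#   pipelining = True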
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
- This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
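# Illustrative usage: point the setting at a file (or executable) holding the
# become password, e.g. ANSIBLE_BECOME_PASSWORD_FILE=~/.ansible/become_pass
# (path is a made-up example).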
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
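# Illustrative ansible.cfg snippet wiring the four cache settings above
# together, using the 'jsonfile' cache plugin shipped with Ansible (the
# connection path is a made-up example):
#   [defaults]
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts
#   fact_caching_prefix = ansible_facts
#   fact_caching_timeout = 86400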
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
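# Illustrative layout satisfying the nesting rule described above:
#   ~/.ansible/collections/ansible_collections/my/collection/galaxy.yml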
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
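# _COLOR_DEFAULTS defines a YAML anchor (&color); each COLOR_* entry below
# pulls in the shared choices via the merge key '<<: *color' and then
# overrides name/default/description for its own setting.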
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those shown with '-v'.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have its coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when a warning is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
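# Illustrative usage (a sketch, not part of the schema): the fork count above
# can be raised in ansible.cfg or via the environment, for example:
#   [defaults]
#   forks = 20
# or:
#   export ANSIBLE_FORKS=20
# The value 20 is an arbitrary example.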
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
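# Illustrative usage (a sketch, not part of the schema): with 'explicit'
# gathering configured as below, plays must opt in to fact gathering:
#   [defaults]
#   gathering = explicit
# and in a play:
#   - hosts: all
#     gather_facts: true
#     tasks: []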
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects.**
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
New projects should **avoid 'merge'**.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
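# Illustrative alternative to 'merge' (as recommended above), assuming two
# user-defined dictionaries named defaults_cfg and host_cfg:
#   merged_cfg: "{{ defaults_cfg | combine(host_cfg, recursive=True) }}"
# defaults_cfg, host_cfg and merged_cfg are hypothetical variable names.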
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
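# Illustrative usage (a sketch, not part of the schema): multiple inventory
# sources can be supplied as a comma separated list, for example:
#   export ANSIBLE_INVENTORY=~/inventories/static.ini,~/inventories/dynamic.yml
# Both paths are hypothetical.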
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
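# Illustrative effect (a sketch, not part of the schema): with jinja2_native
# enabled, a template such as
#   my_list: "{{ [1, 2] + [3] }}"
# yields a real list ([1, 2, 3]) instead of its string representation.
# my_list is a hypothetical variable name.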
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
  description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
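# Illustrative usage (a sketch, not part of the schema): logging can be
# enabled for a single run with, for example:
#   export ANSIBLE_LOG_PATH=~/ansible.log
# The log file path is an arbitrary example.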
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
      style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
EDITOR:
  name: editor application to use
  default: vi
  description:
    - For the cases in which Ansible needs to return a file within an editor, this chooses the application to use.
ini:
- section: defaults
key: editor
version_added: '2.15'
env:
- name: ANSIBLE_EDITOR
version_added: '2.15'
- name: EDITOR
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
    - When True, this causes ansible templating to fail steps that reference variable names that are likely to be typos.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
VAULT_ENCRYPT_SALT:
name: Vault salt to use for encryption
default: ~
description: 'The salt to use for the vault encryption. If it is not provided, a random salt will be used.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_SALT}]
ini:
- {key: vault_encrypt_salt, section: defaults}
version_added: '2.15'
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
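# Illustrative usage (a sketch, not part of the schema): vault-ids use the
# label@source form accepted by --vault-id, for example:
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@prompt
# The 'dev'/'prod' labels and the password file path are hypothetical.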
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by adjusting this setting to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
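# Illustrative usage (a sketch, not part of the schema): to run a custom facts
# module in addition to the default facts, include 'smart' (or 'setup'):
#   [defaults]
#   facts_modules = smart, my_custom_facts
# 'my_custom_facts' is a hypothetical module name.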
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
    - Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
      See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
    - Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour for mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
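# Illustrative usage (a sketch, not part of the schema): discovery can be
# bypassed for a host by setting the variable documented above, for example
# in an ini inventory:
#   legacyhost ansible_python_interpreter=/usr/bin/python3
# 'legacyhost' and the interpreter path are hypothetical.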
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
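# Illustrative effect (a sketch, not part of the schema): with 'always', an
# inventory group named 'web-servers' would be transformed to 'web_servers'
# (the invalid '-' replaced with '_') and a warning issued; 'silently' makes
# the same replacement without the warning. The group name is hypothetical.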
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
  description: List of network platform module families that Ansible groups together for network-specific handling.
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PAGER:
name: pager application to use
default: less
  description:
    - For the cases in which Ansible needs to return output in a pageable fashion, this chooses the application to use.
ini:
- section: defaults
key: pager
version_added: '2.15'
env:
- name: ANSIBLE_PAGER
version_added: '2.15'
- name: PAGER
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
  description: Automatically add host keys to the known hosts file when using the paramiko connection plugin.
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
  description: Toggle whether the paramiko connection plugin searches for discoverable private key files in ~/.ssh/.
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
    - This sets which playbook directories will be used as a root to process vars plugins, which includes finding host_vars/group_vars.
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; it has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
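# Illustrative usage (a sketch, not part of the schema): to cap every task at
# five minutes, for example:
#   export ANSIBLE_TASK_TIMEOUT=300
# The value 300 is an arbitrary example.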
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
  host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
  to a custom SSH configuration file path to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,031 |
Docs: Add code-block wrappers to code examples in implicit_localhost.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `implicit_localhost.rst` file in the `docs/docsite/rst/inventory/` directory, there are 2 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep implicit_localhost.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/inventory/implicit_localhost.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79031
|
https://github.com/ansible/ansible/pull/79086
|
ad5d9843d651dd35287e2ad1ed0f57439e864e7e
|
8ecfb7c6d8384365cf34c893f6e6faad421f3bc3
| 2022-10-05T11:05:55Z |
python
| 2022-10-10T04:03:03Z |
docs/docsite/rst/inventory/implicit_localhost.rst
|
:orphan:
.. _implicit_localhost:
Implicit 'localhost'
====================
When you try to reference a ``localhost`` and you don't have it defined in inventory, Ansible will create an implicit one for you::
- hosts: all
tasks:
- name: check that i have log file for all hosts on my local machine
stat: path=/var/log/hosts/{{inventory_hostname}}.log
delegate_to: localhost
In a case like this (or with ``local_action``), when Ansible needs to contact a 'localhost' but you did not supply one, we create one for you. This host is defined with specific connection variables equivalent to this in an inventory::
...
hosts:
localhost:
vars:
ansible_connection: local
ansible_python_interpreter: "{{ansible_playbook_python}}"
This ensures that the proper connection and Python are used to execute your tasks locally.
You can override the built-in implicit version by creating a ``localhost`` host entry in your inventory. At that point, all implicit behaviors are ignored; the ``localhost`` in inventory is treated just like any other host. Group and host vars will apply, including connection vars, which includes the ``ansible_python_interpreter`` setting. This will also affect ``delegate_to: localhost`` and ``local_action``, the latter being an alias to the former.
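For example, a minimal sketch of an explicit ``localhost`` entry in a YAML inventory (the interpreter path here is an illustrative assumption, not a required value):

.. code-block:: yaml

    all:
      hosts:
        localhost:
          ansible_connection: local
          ansible_python_interpreter: /usr/bin/python3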
.. note::
- This host is not targetable via any group; however, it will use vars from ``host_vars`` and from the 'all' group.
- Implicit localhost does not appear in the ``hostvars`` magic variable unless demanded, such as by ``"{{ hostvars['localhost'] }}"``.
- The ``inventory_file`` and ``inventory_dir`` magic variables are not available for the implicit localhost as they are dependent on **each inventory host**.
- This implicit host also gets triggered by using ``127.0.0.1`` or ``::1`` as they are the IPv4 and IPv6 representations of 'localhost'.
- Even though there are many ways to create it, there will only ever be ONE implicit localhost, using the name first used to create it.
- Having ``connection: local`` does NOT trigger an implicit localhost; you are just changing the connection for the ``inventory_hostname``.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,029 |
Docs: Add code-block wrappers to code examples in playbooks_filters.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `playbooks_filters.rst` file in the Playbook Guide (`docs/docsite/rst/playbook_guide`), there is one instance where a lead-in sentence ends with `::`. Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep playbooks_filters.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/playbook_guide/playbooks_filters.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79029
|
https://github.com/ansible/ansible/pull/79093
|
a48c4422759605a95e74d0a7e8456ad35f72caa8
|
dfef3260a52a0d1038ed0c6840a5d40a4dbfeeb3
| 2022-10-05T10:46:33Z |
python
| 2022-10-10T21:59:23Z |
docs/docsite/rst/playbook_guide/playbooks_filters.rst
|
.. _playbooks_filters:
********************************
Using filters to manipulate data
********************************
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:
.. code-block:: yaml+jinja
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
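For example, a minimal ``defaults/main.yml`` sketch providing the same fallback value inside a role:

.. code-block:: yaml

    # roles/myrole/defaults/main.yml (illustrative path)
    some_variable: 5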
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:
.. code-block:: yaml+jinja
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:
.. code-block:: yaml+jinja
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the ``mode=0444`` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:
.. code-block:: yaml+jinja
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
A convenient way of requiring a variable to be overridden is to give it an undefined value using the ``undef`` keyword. This can be useful in a role's defaults.
.. code-block:: yaml+jinja
galaxy_url: "https://galaxy.ansible.com"
galaxy_api_key: {{ undef(hint="You must specify your Galaxy API key") }}
Defining different values for true/false/null (ternary)
=======================================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):
.. code-block:: yaml+jinja
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
In addition, you can define one value to use on true, one value on false, and a third value on null (new in version 2.8):
.. code-block:: yaml+jinja
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Managing data types
===================
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:
.. code-block:: yaml+jinja
{{ myvar | type_debug }}
You should note that, while this may seem like a useful filter for checking that you have the right type of data in a variable, you should often prefer :ref:`type tests <type_tests>`, which allow you to test for specific data types.
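For example, a quick sketch using built-in Jinja2 type tests instead of comparing ``type_debug`` output (the variable name is illustrative):

.. code-block:: yaml+jinja

    {{ myvar is string }}
    {{ myvar is mapping }}
    {{ myvar is number }}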
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:
.. code-block:: yaml+jinja
{{ dict | dict2items }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
tags:
Application: payment
Environment: dev
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:
.. code-block:: yaml+jinja
{{ files | dict2items(key_name='file', value_name='path') }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
files:
users: /etc/passwd
groups: /etc/group
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:
.. code-block:: yaml+jinja
{{ tags | items2dict }}
List data (before applying the ``items2dict`` filter):
.. code-block:: yaml
tags:
- key: Application
value: payment
- key: Environment
value: dev
Dictionary data (after applying the ``items2dict`` filter):
.. code-block:: text
Application: payment
Environment: dev
The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:
.. code-block:: yaml
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:
.. code-block:: yaml+jinja
    {{ fruits | items2dict(key_name='fruit', value_name='color') }}
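Dictionary data (after applying the ``items2dict`` filter with these arguments):

.. code-block:: yaml

    apple: red
    pear: yellow
    grapefruit: yellow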
If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:
.. code-block:: yaml
- ansible.builtin.debug:
msg: test
when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:
.. code-block:: yaml
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Formatting data: YAML and JSON
==============================
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:
.. code-block:: yaml+jinja
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default 80-symbol string length limit. That causes an unexpected line break after the 80th symbol (if there is a space after the 80th symbol).
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:
.. code-block:: yaml+jinja
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_ for ``dump()``.
If you are reading in some already formatted data:
.. code-block:: yaml+jinja
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
For example:
.. code-block:: yaml+jinja
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
Filter `to_json` and Unicode support
------------------------------------
By default `to_json` and `to_nice_json` will convert data received to ASCII, so:
.. code-block:: yaml+jinja
{{ 'München'| to_json }}
will return:
.. code-block:: text
'M\u00fcnchen'
To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:
.. code-block:: yaml+jinja
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
For example:
.. code-block:: yaml+jinja
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
.. _zip_filter_example:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``:
.. code-block:: yaml+jinja
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]]
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"]]
To always exhaust all lists use ``zip_longest``:
.. code-block:: yaml+jinja
- name: Give me longest combo of three lists , fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
# => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]]
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:
.. code-block:: yaml+jinja
{{ dict(keys_list | zip(values_list)) }}
List data (before applying the ``zip`` filter):
.. code-block:: yaml
keys_list:
- one
- two
values_list:
- apple
- orange
Dictionary data (after applying the ``zip`` filter):
.. code-block:: yaml
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression:
.. code-block:: yaml+jinja
{{ users | subelements('groups', skip_missing=True) }}
Data before applying the ``subelements`` filter:
.. code-block:: yaml
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Data after applying the ``subelements`` filter:
.. code-block:: yaml
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:
.. code-block:: yaml+jinja
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:
.. code-block:: yaml+jinja
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be:
.. code-block:: text
{'a':1, 'b':3}
The filter can also take multiple arguments to merge:
.. code-block:: yaml+jinja
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
  Is a boolean, defaulting to ``False``.
  Determines whether ``combine`` should recursively merge nested hashes.
  Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.

list_merge
  Is a string; its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
  It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged:
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True) }}
This would result in:
.. code-block:: yaml
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:
.. code-block:: yaml
default:
a:
- default
patch:
a:
- patch
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='keep') }}
This would result in:
.. code-block:: yaml
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append') }}
This would result in:
.. code-block:: yaml
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend') }}
This would result in:
.. code-block:: yaml
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept:
.. code-block:: yaml
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in:
.. code-block:: yaml
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together:
.. code-block:: yaml
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array):
.. code-block:: yaml+jinja
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be:
.. code-block:: none
['x', 'z']
[42, 31]
The filter can take another argument:
.. code-block:: yaml+jinja
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
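For example, assuming a hypothetical group 'x' containing two hosts whose ``ec2_ip_address`` values are known, the result might look like:

.. code-block:: none

    ['192.0.2.10', '192.0.2.11']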
The third argument to the filter can also be a list, for a recursive lookup inside the container:
.. code-block:: yaml+jinja
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
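A minimal data sketch for which that expression returns ``[42]``:

.. code-block:: yaml

    b:
      a:
        x:
          y: 42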
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list:
.. code-block:: yaml+jinja
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size:
.. code-block:: yaml+jinja
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}"
Also see the :ref:`zip_filter`.
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example:
.. code-block:: yaml+jinja
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in:
.. code-block:: json
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <https://jmespath.org/examples.html>`_.
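A minimal sketch of preparing a controller for this filter (assuming ``pip`` manages the controller's Python environment):

.. code-block:: shell

    ansible-galaxy collection install community.general
    pip install jmespath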
Consider this data structure:
.. code-block:: json
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query:
.. code-block:: yaml+jinja
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names:
.. code-block:: yaml+jinja
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma separated string:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
.. code-block:: yaml+jinja
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
        server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
To extract ports from all clusters with name starting with 'server1':
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract ports from all clusters with name containing 'server1':
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: While using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure.
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with '52:54:00':
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
.. _random_filter_example:
Random items or numbers
-----------------------
The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list:
.. code-block:: yaml+jinja
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 (inclusive) and a specified integer (exclusive):
.. code-block:: yaml+jinja
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
To get a random number from 0 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(step=10) }}
# => 70
To get a random number from 1 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
You can initialize the random number generator from a seed to create random-but-idempotent numbers:
.. code-block:: yaml+jinja
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
The ``shuffle`` filter randomizes an existing list, giving a different order every invocation.
To get a random list from an existing list:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing.
.. _list_filters:
Managing list variables
=======================
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from list of numbers:
.. code-block:: yaml+jinja
{{ list1 | min }}
.. versionadded:: 2.11
To get the minimum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
To get the maximum value from a list of numbers:
.. code-block:: yaml+jinja
{{ [3, 4, 2] | max }}
.. versionadded:: 2.11
To get the maximum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does):
.. code-block:: yaml+jinja
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
Flatten only the first level of a list (akin to the `items` lookup):
.. code-block:: yaml+jinja
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
.. versionadded:: 2.11
Preserve nulls in a list (by default, ``flatten`` removes them):
.. code-block:: yaml+jinja
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
# => [3, None, 4, [2]]
.. _set_theory_filters:
Selecting from sets or lists (set theory)
=========================================
You can select or combine items from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
To get a union of two lists:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
To get the intersection of 2 lists (unique list of all items in both):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
To get the difference of 2 lists (items in 1 that don't exist in 2):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
To get the symmetric difference of 2 lists (items exclusive to each list):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
.. _math_stuff:
Calculating numbers (math)
==========================
.. versionadded:: 1.9
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
Get the logarithm (default is e):
.. code-block:: yaml+jinja
{{ 8 | log }}
# => 2.0794415416798357
Get the base 10 logarithm:
.. code-block:: yaml+jinja
{{ 8 | log(10) }}
# => 0.9030899869919435
Raise to a power (here, 8 to the 5th):
.. code-block:: yaml+jinja
{{ 8 | pow(5) }}
# => 32768.0
Square root, or the 5th root:
.. code-block:: yaml+jinja
{{ 8 | root }}
# => 2.8284271247461903
{{ 8 | root(5) }}
# => 1.5157165665103982
Managing network interactions
=============================
These filters help you with common network tasks.
.. note::
These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
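For example, the collection can typically be installed with ``ansible-galaxy``:

.. code-block:: shell

    ansible-galaxy collection install ansible.netcommon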
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipaddr }}
You can also require a specific IP protocol version:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
The ``ipaddr`` filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use:
.. code-block:: yaml+jinja
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
# => 192.0.2.1
More information about the ``ipaddr`` filter and a complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The spec file is valid, formatted YAML that
defines how to parse the CLI output and return JSON data. Below is an example
of a valid spec file that will parse the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break a large command
output into blocks that can be parsed individually. This can be done using the
``start_block`` and ``end_block`` directives.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM use the following
filter:
.. code-block:: yaml+jinja
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
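A minimal sketch of installing that dependency on the controller (assuming a ``pip``-managed environment):

.. code-block:: shell

    pip install textfsm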
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid formatted YAML. It defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the XPath value contained in ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is
relative to the XPath value in ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:
.. code-block:: none
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note::
For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* VLANs are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters long.
To sort a VLAN list:
.. code-block:: yaml+jinja
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
This example renders the following sorted list:
.. code-block:: text
['100,1688,3002-3005,3999']
Another example Jinja template:
.. code-block:: yaml+jinja
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
.. _hash_filters:
Hashing and encrypting strings and passwords
==============================================
.. versionadded:: 1.9
To get the sha1 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('sha1') }}
# => "b444ac06613fc8d63795be9ad0beaf55011936ac"
To get the md5 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('md5') }}
# => "5a105e8b9d40e1329780d62ea2265d8a"
Get a string checksum:
.. code-block:: yaml+jinja
{{ 'test2' | checksum }}
# => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f"
Other hashes (platform dependent):
.. code-block:: yaml+jinja
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt):
.. code-block:: yaml+jinja
{{ 'passwordsaresecret' | password_hash('sha512') }}
# => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/"
To get a sha256 password hash with a specific salt:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
# => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4"
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
# => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0"
The hash types available depend on the control system running Ansible: ``hash`` depends on `hashlib <https://docs.python.org/3.8/library/hashlib.html>`_, and ``password_hash`` depends on `passlib <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html>`_. The `crypt <https://docs.python.org/3.8/library/crypt.html>`_ library is used as a fallback if ``passlib`` is not installed.
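If you want the ``passlib`` behaviour everywhere, a typical way to add it (again assuming a ``pip``-managed controller environment) is:

.. code-block:: shell

    pip install passlib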
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
# => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7"
The filter `password_hash` produces different results depending on whether you installed `passlib` or not.
To ensure idempotency, specify `rounds` to be neither `crypt`'s nor `passlib`'s default, which is `5000` for `crypt` and a variable value (`535000` for sha256, `656000` for sha512) for `passlib`:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=5001) }}
# => "$5$rounds=5001$mysecretsalt$wXcTWWXbfcR8er5IVf7NuquLvnUA6s8/qdtOhAZ.xN."
Hash type 'blowfish' (BCrypt) provides the facility to specify the version of the BCrypt algorithm.
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('blowfish', '1234567890123456789012', ident='2b') }}
# => "$2b$12$123456789012345678901uuJ4qFdej6xnWjOQT.FStqfdoY8dYUPC"
.. note::
The parameter is only available for `blowfish (BCrypt) <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt>`_.
Other hash types will simply ignore this parameter.
Valid values for this parameter are: ['2', '2a', '2y', '2b']
.. versionadded:: 2.12
You can also use the Ansible :ref:`vault <vault>` filter to encrypt data:
.. code-block:: yaml+jinja
# simply encrypt my key in a vault
vars:
myvaultedkey: "{{ keyrawdata|vault(passphrase) }}"
- name: save templated vaulted data
template: src=dump_template_data.j2 dest=/some/key/vault.txt
vars:
mysalt: '{{ 2**256|random(seed=inventory_hostname) }}'
template_data: '{{ secretdata|vault(vaultsecret, salt=mysalt) }}'
And then decrypt it using the unvault filter:
.. code-block:: yaml+jinja
# simply decrypt my key from a vault
vars:
mykey: "{{ myvaultedkey|unvault(passphrase) }}"
- name: save templated unvaulted data
template: src=dump_template_data.j2 dest=/some/key/clear.txt
vars:
template_data: '{{ secretdata|unvault(vaultsecret) }}'
.. _other_useful_filters:
Manipulating text
=================
Several filters work with text, including URLs, file names, and path names.
.. _comment_filter:
Adding comments to files
------------------------
The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following:
.. code-block:: yaml+jinja
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):
.. code-block:: yaml+jinja
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter:
.. code-block:: yaml+jinja
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style:
.. code-block:: yaml+jinja
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: ini
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter:
.. code-block:: yaml+jinja
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
URLEncode Variables
-------------------
The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:
.. code-block:: yaml+jinja
{{ 'Trollhättan' | urlencode }}
# => 'Trollh%C3%A4ttan'
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields:
.. code-block:: yaml+jinja
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:
.. code-block:: yaml+jinja
# Extracts the database name from a string
{{ 'server1/database42' | regex_search('database[0-9]+') }}
# => 'database42'
# Example for a case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }}
# => 'BAR'
# Extracts server and database id from a string
{{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }}
# => ['1', '42']
# Extracts dividend and divisor from a division
{{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }}
# => ['21', '42']
The ``regex_search`` filter returns an empty string if it cannot find a match:
.. code-block:: yaml+jinja
{{ 'ansible' | regex_search('foobar') }}
# => ''
.. note::
The ``regex_search`` filter returns ``None`` when used in a Jinja expression (for example in conjunction with operators, other filters, and so on). See the two examples below.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') == '' }}
# => False
{{ 'ansible' | regex_search('foobar') is none }}
# => True
This is due to historic behavior and the custom re-implementation of some of the Jinja internals in Ansible. Enable the ``jinja2_native`` setting if you want the ``regex_search`` filter to always return ``None`` if it cannot find a match. See :ref:`jinja2_faqs` for details.
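If you choose to enable it, a minimal sketch of that setting in ``ansible.cfg`` looks like this (illustrative; see the configuration documentation for details):

.. code-block:: ini

   # ansible.cfg -- illustrative sketch
   [defaults]
   jinja2_native = true

The same effect can be had for a single run by exporting ``ANSIBLE_JINJA2_NATIVE=1`` in the environment.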
To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:
.. code-block:: yaml+jinja
# Returns a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
# => ['8.8.8.8', '8.8.4.4']
# Returns all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }}
# => ['CAR', 'tar', 'bar']
To replace text in a string with regex, use the ``regex_replace`` filter:
.. code-block:: yaml+jinja
# Convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# => 'able'
# Convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# => 'bar'
# Convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# => 'localhost, 80'
# Convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# => 'localhost'
# Comment all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }}
# => '#CAR\n#tar\nfoo\n#bar\n'
.. note::
If you want to match the whole string and you are using ``*`` make sure to always wrap your regular expression in the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:
.. code-block:: yaml+jinja
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note::
Prior to Ansible 2.0, if the ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Managing file names and path names
----------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':
.. code-block:: yaml+jinja
{{ path | basename }}
To get the last name of a Windows-style file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_basename }}
To separate the Windows drive letter from the rest of a file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_splitdrive }}
To get only the Windows drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | last }}
To get the directory from a path:
.. code-block:: yaml+jinja
{{ path | dirname }}
To get the directory from a Windows path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_dirname }}
To expand a path containing a tilde (``~``) character (new in version 1.5):
.. code-block:: yaml+jinja
{{ path | expanduser }}
To expand a path containing environment variables:
.. code-block:: yaml+jinja
{{ path | expandvars }}
.. note:: ``expandvars`` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8):
.. code-block:: yaml+jinja
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7):
.. code-block:: yaml+jinja
{{ path | relpath('/etc') }}
To get the root and extension of a path or file name (new in version 2.0):
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
The ``splitext`` filter always returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters:
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
To join one or more path components:
.. code-block:: yaml+jinja
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
Manipulating strings
====================
To add quotes for shell usage:
.. code-block:: yaml+jinja
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
To concatenate a list into a string:
.. code-block:: yaml+jinja
{{ list | join(" ") }}
To split a string into a list:
.. code-block:: yaml+jinja
{{ csv_string | split(",") }}
.. versionadded:: 2.11
To work with Base64 encoded strings:
.. code-block:: yaml+jinja
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``:
.. code-block:: yaml+jinja
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
.. note:: The ``string`` filter is only required for Python 2 and ensures that the text to encode is a unicode string. Without that filter before ``b64encode``, the wrong value will be encoded.
.. versionadded:: 2.6
Managing UUIDs
==============
To create a namespaced UUIDv5:
.. code-block:: yaml+jinja
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':
.. code-block:: yaml+jinja
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:jinja-filters.map>`:
.. code-block:: yaml+jinja
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Handling dates and times
========================
To get a date object from a string, use the ``to_datetime`` filter:
.. code-block:: yaml+jinja
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. note:: For a full list of format codes for working with python date format strings, see the `python datetime documentation <https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior>`_.
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the ``strftime`` filter:
.. code-block:: yaml+jinja
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. versionadded:: 2.13
``strftime`` takes an optional ``utc`` argument, defaulting to ``False``, meaning times are in the local timezone:

.. code-block:: yaml+jinja
{{ '%H:%M:%S' | strftime }} # time now in local timezone
{{ '%H:%M:%S' | strftime(utc=True) }} # time now in UTC
.. note:: For all available format codes, see https://docs.python.org/3/library/time.html#time.strftime
Getting Kubernetes resource names
=================================
.. note::
These filters have migrated to the `kubernetes.core <https://galaxy.ansible.com/kubernetes/core>`_ collection. Follow the installation instructions to install that collection.
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash:
.. code-block:: yaml+jinja
{{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications:
.. code-block:: yaml+jinja
my_secret:
kind: Secret
metadata:
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | kubernetes.core.k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`tips_and_tricks`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,793 |
Add option to disable DNS lookups in iptables module
|
### Summary
When modifying firewall rules with the `iptables` module in Ansible, iptables sometimes tries to do reverse DNS lookups for the IP addresses, which can cause huge delays when the defined DNS server is unreachable. When working with iptables from the command line, one can just use the `-n` switch (`--numeric`) of the iptables binary to disable DNS lookups altogether. However, when using the `ansible.builtin.iptables` module, such a switch cannot be set.
The delay problem of iptables has been a known issue for a long time, but usually it can be bypassed with the trick mentioned above.
The `--numeric` switch is only available for the list (`-L`) command, and even though this module does not directly offer such functionality, internally it uses it for checking policies and so on.
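For reference, the plain CLI workaround looks like this (illustrative only):

```shell
# list rules numerically, skipping reverse DNS lookups
iptables -L -n
```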
### Issue Type
Feature Idea
### Component Name
iptables
### Additional Information
For the sake of clarity: the problem is not about using hostnames for firewall rules (which is generally considered a bad idea anyway), but the fact that iptables does the reverse lookup of the IP addresses too.
As for the question: why don't you just set up a working DNS?
- Some environments might not have DNS at all and still need firewall
- Not all machines need DNS, so no DNS for them
- DNS might not be available during the deployment when setting the initial firewall rules but will be around later on
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78793
|
https://github.com/ansible/ansible/pull/78828
|
5d253a13807e884b7ce0b6b57a963a45e2f0322c
|
cc2e7501db65193b7103195251dae5cffd8c03ca
| 2022-09-16T07:50:54Z |
python
| 2022-10-11T20:59:35Z |
changelogs/fragments/78828-iptables-option-to-disable-dns-lookups.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,793 |
Add option to disable DNS lookups in iptables module
|
### Summary
When modifying firewall rules with the `iptables` module in Ansible, iptables sometimes tries to do reverse DNS lookups for the IP addresses, which can cause huge delays when the defined DNS server is unreachable. When working with iptables from the command line, one can just use the `-n` switch (`--numeric`) of the iptables binary to disable DNS lookups altogether. However, when using the `ansible.builtin.iptables` module, such a switch cannot be set.
The delay problem of iptables has been a known issue for a long time, but usually it can be bypassed with the trick mentioned above.
The `--numeric` switch is only available for the list (`-L`) command, and even though this module does not directly offer such functionality, internally it uses it for checking policies and so on.
### Issue Type
Feature Idea
### Component Name
iptables
### Additional Information
For the sake of clarity: the problem is not about using hostnames for firewall rules (which is generally considered a bad idea anyway), but the fact that iptables does the reverse lookup of the IP addresses too.
As for the question: why don't you just set up a working DNS?
- Some environments might not have DNS at all and still need firewall
- Not all machines need DNS, so no DNS for them
- DNS might not be available during the deployment when setting the initial firewall rules but will be around later on
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78793
|
https://github.com/ansible/ansible/pull/78828
|
5d253a13807e884b7ce0b6b57a963a45e2f0322c
|
cc2e7501db65193b7103195251dae5cffd8c03ca
| 2022-09-16T07:50:54Z |
python
| 2022-10-11T20:59:35Z |
lib/ansible/modules/iptables.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2015, Linus Unnebäck <[email protected]>
# Copyright: (c) 2017, Sébastien DA ROCHA <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: iptables
short_description: Modify iptables rules
version_added: "2.0"
author:
- Linus Unnebäck (@LinusU) <[email protected]>
- Sébastien DA ROCHA (@sebastiendarocha)
description:
- C(iptables) is used to set up, maintain, and inspect the tables of IP packet
filter rules in the Linux kernel.
- This module does not handle the saving and/or loading of rules, but rather
only manipulates the current rules that are present in memory. This is the
same as the behaviour of the C(iptables) and C(ip6tables) command which
this module uses internally.
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: linux
notes:
- This module just deals with individual rules. If you need advanced
chaining of rules the recommended way is to template the iptables restore
file.
options:
table:
description:
- This option specifies the packet matching table which the command should operate on.
- If the kernel is configured with automatic module loading, an attempt will be made
to load the appropriate module for that table if it is not already there.
type: str
choices: [ filter, nat, mangle, raw, security ]
default: filter
state:
description:
- Whether the rule should be absent or present.
type: str
choices: [ absent, present ]
default: present
action:
description:
- Whether the rule should be appended at the bottom or inserted at the top.
- If the rule already exists the chain will not be modified.
type: str
choices: [ append, insert ]
default: append
version_added: "2.2"
rule_num:
description:
- Insert the rule as the given rule number.
- This works only with C(action=insert).
type: str
version_added: "2.5"
ip_version:
description:
- Which version of the IP protocol this rule should apply to.
type: str
choices: [ ipv4, ipv6 ]
default: ipv4
chain:
description:
- Specify the iptables chain to modify.
- This could be a user-defined chain or one of the standard iptables chains, like
C(INPUT), C(FORWARD), C(OUTPUT), C(PREROUTING), C(POSTROUTING), C(SECMARK) or C(CONNSECMARK).
type: str
protocol:
description:
- The protocol of the rule or of the packet to check.
- The specified protocol can be one of C(tcp), C(udp), C(udplite), C(icmp), C(ipv6-icmp) or C(icmpv6),
C(esp), C(ah), C(sctp) or the special keyword C(all), or it can be a numeric value,
representing one of these protocols or a different one.
- A protocol name from I(/etc/protocols) is also allowed.
- A C(!) argument before the protocol inverts the test.
- The number zero is equivalent to all.
- C(all) will match with all protocols and is taken as default when this option is omitted.
type: str
source:
description:
- Source specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
destination:
description:
- Destination specification.
- Address can be either a network name, a hostname, a network IP address
(with /mask), or a plain IP address.
- Hostnames will be resolved once only, before the rule is submitted to
the kernel. Please note that specifying any name to be resolved with
a remote query such as DNS is a really bad idea.
- The mask can be either a network mask or a plain number, specifying
the number of 1's at the left side of the network mask. Thus, a mask
of 24 is equivalent to 255.255.255.0. A C(!) argument before the
address specification inverts the sense of the address.
type: str
tcp_flags:
description:
- TCP flags specification.
- C(tcp_flags) expects a dict with the two keys C(flags) and C(flags_set).
type: dict
default: {}
version_added: "2.4"
suboptions:
flags:
description:
- List of flags you want to examine.
type: list
elements: str
flags_set:
description:
- Flags to be set.
type: list
elements: str
match:
description:
- Specifies a match to use, that is, an extension module that tests for
a specific property.
- The set of matches make up the condition under which a target is invoked.
- Matches are evaluated first to last if specified as an array and work in short-circuit
fashion, i.e. if one extension yields false, evaluation will stop.
type: list
elements: str
default: []
jump:
description:
- This specifies the target of the rule; i.e., what to do if the packet matches it.
- The target can be a user-defined chain (other than the one
this rule is in), one of the special builtin targets which decide the
fate of the packet immediately, or an extension (see EXTENSIONS
below).
- If this option is omitted in a rule (and the goto parameter
is not used), then matching the rule will have no effect on the
packet's fate, but the counters on the rule will be incremented.
type: str
gateway:
description:
- This specifies the IP address of host to send the cloned packets.
- This option is only valid when C(jump) is set to C(TEE).
type: str
version_added: "2.8"
log_prefix:
description:
- Specifies a log text for the rule. Only makes sense with a LOG jump.
type: str
version_added: "2.5"
log_level:
description:
- Logging level according to the syslogd-defined priorities.
- The value can be strings or numbers from 1-8.
- This parameter is only applicable if C(jump) is set to C(LOG).
type: str
version_added: "2.8"
choices: [ '0', '1', '2', '3', '4', '5', '6', '7', 'emerg', 'alert', 'crit', 'error', 'warning', 'notice', 'info', 'debug' ]
goto:
description:
- This specifies that the processing should continue in a user specified chain.
- Unlike the jump argument return will not continue processing in
this chain but instead in the chain that called us via jump.
type: str
in_interface:
description:
- Name of an interface via which a packet was received (only for packets
entering the C(INPUT), C(FORWARD) and C(PREROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins with
this name will match.
- If this option is omitted, any interface name will match.
type: str
out_interface:
description:
- Name of an interface via which a packet is going to be sent (for
packets entering the C(FORWARD), C(OUTPUT) and C(POSTROUTING) chains).
- When the C(!) argument is used before the interface name, the sense is inverted.
- If the interface name ends in a C(+), then any interface which begins
with this name will match.
- If this option is omitted, any interface name will match.
type: str
fragment:
description:
- This means that the rule only refers to second and further fragments
of fragmented packets.
- Since there is no way to tell the source or destination ports of such
a packet (or ICMP type), such a packet will not match any rules which specify them.
- When the "!" argument precedes fragment argument, the rule will only match head fragments,
or unfragmented packets.
type: str
set_counters:
description:
- This enables the administrator to initialize the packet and byte
counters of a rule (during C(INSERT), C(APPEND), C(REPLACE) operations).
type: str
source_port:
description:
- Source port or port range specification.
- This can either be a service name or a port number.
- An inclusive range can also be specified, using the format C(first:last).
- If the first port is omitted, C(0) is assumed; if the last is omitted, C(65535) is assumed.
- If the first port is greater than the second one they will be swapped.
type: str
destination_port:
description:
- "Destination port or port range specification. This can either be
a service name or a port number. An inclusive range can also be
specified, using the format first:last. If the first port is omitted,
'0' is assumed; if the last is omitted, '65535' is assumed. If the
first port is greater than the second one they will be swapped.
This is only valid if the rule also specifies one of the following
protocols: tcp, udp, dccp or sctp."
type: str
destination_ports:
description:
- This specifies multiple destination port numbers or port ranges to match in the multiport module.
- It can only be used in conjunction with the protocols tcp, udp, udplite, dccp and sctp.
type: list
elements: str
version_added: "2.11"
to_ports:
description:
- This specifies a destination port or range of ports to use, without
this, the destination port is never altered.
- This is only valid if the rule also specifies one of the protocol
C(tcp), C(udp), C(dccp) or C(sctp).
type: str
to_destination:
description:
- This specifies a destination address to use with C(DNAT).
- Without this, the destination address is never altered.
type: str
version_added: "2.1"
to_source:
description:
- This specifies a source address to use with C(SNAT).
- Without this, the source address is never altered.
type: str
version_added: "2.2"
syn:
description:
- This allows matching packets that have the SYN bit set and the ACK
and RST bits unset.
- When negated, this matches all packets with the RST or the ACK bits set.
type: str
choices: [ ignore, match, negate ]
default: ignore
version_added: "2.5"
set_dscp_mark:
description:
- This allows specifying a DSCP mark to be added to packets.
It takes either an integer or hex value.
- Mutually exclusive with C(set_dscp_mark_class).
type: str
version_added: "2.1"
set_dscp_mark_class:
description:
- This allows specifying a predefined DiffServ class which will be
translated to the corresponding DSCP mark.
- Mutually exclusive with C(set_dscp_mark).
type: str
version_added: "2.1"
comment:
description:
- This specifies a comment that will be added to the rule.
type: str
ctstate:
description:
- A list of the connection states to match in the conntrack module.
- Possible values are C(INVALID), C(NEW), C(ESTABLISHED), C(RELATED), C(UNTRACKED), C(SNAT), C(DNAT).
type: list
elements: str
default: []
src_range:
description:
- Specifies the source IP range to match in the iprange module.
type: str
version_added: "2.8"
dst_range:
description:
- Specifies the destination IP range to match in the iprange module.
type: str
version_added: "2.8"
match_set:
description:
- Specifies a set name which can be defined by ipset.
- Must be used together with the match_set_flags parameter.
- When the C(!) argument is prepended then it inverts the rule.
- Uses the iptables set extension.
type: str
version_added: "2.11"
match_set_flags:
description:
- Specifies the necessary flags for the match_set parameter.
- Must be used together with the match_set parameter.
- Uses the iptables set extension.
type: str
choices: [ "src", "dst", "src,dst", "dst,src" ]
version_added: "2.11"
limit:
description:
- Specifies the maximum average number of matches to allow per second.
- The number can specify units explicitly, using C(/second), C(/minute),
C(/hour) or C(/day), or parts of them (so C(5/second) is the same as
C(5/s)).
type: str
limit_burst:
description:
- Specifies the maximum burst before the above limit kicks in.
type: str
version_added: "2.1"
uid_owner:
description:
- Specifies the UID or username to use in match by owner rule.
- From Ansible 2.6, when the C(!) argument is prepended, it inverts the rule to apply instead to all users except the one specified.
type: str
version_added: "2.1"
gid_owner:
description:
- Specifies the GID or group to use in match by owner rule.
type: str
version_added: "2.9"
reject_with:
description:
- 'Specifies the error packet type to return while rejecting. It implies
"jump: REJECT".'
type: str
version_added: "2.1"
icmp_type:
description:
- This allows specification of the ICMP type, which can be a numeric
ICMP type, type/code pair, or one of the ICMP type names shown by the
command 'iptables -p icmp -h'
type: str
version_added: "2.2"
flush:
description:
- Flushes the specified table and chain of all rules.
- If no chain is specified then the entire table is purged.
- Ignores all other parameters.
type: bool
default: false
version_added: "2.2"
policy:
description:
- Set the policy for the chain to the given target.
- Only built-in chains can have policies.
- This parameter requires the C(chain) parameter.
- If you specify this parameter, all other parameters will be ignored.
- This parameter is used to set default policy for the given C(chain).
Do not confuse this with C(jump) parameter.
type: str
choices: [ ACCEPT, DROP, QUEUE, RETURN ]
version_added: "2.2"
wait:
description:
- Wait N seconds for the xtables lock to prevent multiple instances of
the program from running concurrently.
type: str
version_added: "2.10"
chain_management:
description:
- If C(true) and C(state) is C(present), the chain will be created if needed.
- If C(true) and C(state) is C(absent), the chain will be deleted if the only other parameters passed are C(chain) and optionally C(table).
type: bool
default: false
version_added: "2.13"
'''
EXAMPLES = r'''
- name: Block specific IP
ansible.builtin.iptables:
chain: INPUT
source: 8.8.8.8
jump: DROP
become: yes
- name: Forward port 80 to 8600
ansible.builtin.iptables:
table: nat
chain: PREROUTING
in_interface: eth0
protocol: tcp
match: tcp
destination_port: 80
jump: REDIRECT
to_ports: 8600
comment: Redirect web traffic to port 8600
become: yes
- name: Allow related and established connections
ansible.builtin.iptables:
chain: INPUT
ctstate: ESTABLISHED,RELATED
jump: ACCEPT
become: yes
- name: Allow new incoming SYN packets on TCP port 22 (SSH)
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 22
ctstate: NEW
syn: match
jump: ACCEPT
comment: Accept new SSH connections.
- name: Match on IP ranges
ansible.builtin.iptables:
chain: FORWARD
src_range: 192.168.1.100-192.168.1.199
dst_range: 10.0.0.1-10.0.0.50
jump: ACCEPT
- name: Allow source IPs defined in ipset "admin_hosts" on port 22
ansible.builtin.iptables:
chain: INPUT
match_set: admin_hosts
match_set_flags: src
destination_port: 22
jump: ACCEPT
- name: Tag all outbound tcp packets with DSCP mark 8
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark: 8
protocol: tcp
- name: Tag all outbound tcp packets with DSCP DiffServ class CS1
ansible.builtin.iptables:
chain: OUTPUT
jump: DSCP
table: mangle
set_dscp_mark_class: CS1
protocol: tcp
# Create the user-defined chain ALLOWLIST
- iptables:
chain: ALLOWLIST
chain_management: true
# Delete the user-defined chain ALLOWLIST
- iptables:
chain: ALLOWLIST
chain_management: true
state: absent
- name: Insert a rule on line 5
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_port: 8080
jump: ACCEPT
action: insert
rule_num: 5
# Think twice before running following task as this may lock target system
- name: Set the policy for the INPUT chain to DROP
ansible.builtin.iptables:
chain: INPUT
policy: DROP
- name: Reject tcp with tcp-reset
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
reject_with: tcp-reset
ip_version: ipv4
- name: Set tcp flags
ansible.builtin.iptables:
chain: OUTPUT
jump: DROP
protocol: tcp
tcp_flags:
flags: ALL
flags_set:
- ACK
- RST
- SYN
- FIN
- name: Iptables flush filter
ansible.builtin.iptables:
chain: "{{ item }}"
flush: yes
with_items: [ 'INPUT', 'FORWARD', 'OUTPUT' ]
- name: Iptables flush nat
ansible.builtin.iptables:
table: nat
chain: '{{ item }}'
flush: yes
with_items: [ 'INPUT', 'OUTPUT', 'PREROUTING', 'POSTROUTING' ]
- name: Log packets arriving into a user-defined chain
ansible.builtin.iptables:
chain: LOGGING
action: append
state: present
limit: 2/second
limit_burst: 20
log_prefix: "IPTABLES:INFO: "
log_level: info
- name: Allow connections on multiple ports
ansible.builtin.iptables:
chain: INPUT
protocol: tcp
destination_ports:
- "80"
- "443"
- "8081:8083"
jump: ACCEPT
'''
import re
from ansible.module_utils.compat.version import LooseVersion
from ansible.module_utils.basic import AnsibleModule
IPTABLES_WAIT_SUPPORT_ADDED = '1.4.20'
IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED = '1.6.0'
BINS = dict(
ipv4='iptables',
ipv6='ip6tables',
)
ICMP_TYPE_OPTIONS = dict(
ipv4='--icmp-type',
ipv6='--icmpv6-type',
)
def append_param(rule, param, flag, is_list):
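    # Append a flag/value pair to the rule; a leading '!' in the value emits
    # iptables negation, and list parameters are expanded one by one.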
if is_list:
for item in param:
append_param(rule, item, flag, False)
else:
if param is not None:
if param[0] == '!':
rule.extend(['!', flag, param[1:]])
else:
rule.extend([flag, param])
def append_tcp_flags(rule, param, flag):
if param:
if 'flags' in param and 'flags_set' in param:
rule.extend([flag, ','.join(param['flags']), ','.join(param['flags_set'])])
def append_match_flag(rule, param, flag, negatable):
if param == 'match':
rule.extend([flag])
elif negatable and param == 'negate':
rule.extend(['!', flag])
def append_csv(rule, param, flag):
if param:
rule.extend([flag, ','.join(param)])
def append_match(rule, param, match):
if param:
rule.extend(['-m', match])
def append_jump(rule, param, jump):
if param:
rule.extend(['-j', jump])
def append_wait(rule, param, flag):
if param:
rule.extend([flag, param])
def construct_rule(params):
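    # Translate the module's parameters into a flat iptables argument list.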
rule = []
append_wait(rule, params['wait'], '-w')
append_param(rule, params['protocol'], '-p', False)
append_param(rule, params['source'], '-s', False)
append_param(rule, params['destination'], '-d', False)
append_param(rule, params['match'], '-m', True)
append_tcp_flags(rule, params['tcp_flags'], '--tcp-flags')
append_param(rule, params['jump'], '-j', False)
if params.get('jump') and params['jump'].lower() == 'tee':
append_param(rule, params['gateway'], '--gateway', False)
append_param(rule, params['log_prefix'], '--log-prefix', False)
append_param(rule, params['log_level'], '--log-level', False)
append_param(rule, params['to_destination'], '--to-destination', False)
append_match(rule, params['destination_ports'], 'multiport')
append_csv(rule, params['destination_ports'], '--dports')
append_param(rule, params['to_source'], '--to-source', False)
append_param(rule, params['goto'], '-g', False)
append_param(rule, params['in_interface'], '-i', False)
append_param(rule, params['out_interface'], '-o', False)
append_param(rule, params['fragment'], '-f', False)
append_param(rule, params['set_counters'], '-c', False)
append_param(rule, params['source_port'], '--source-port', False)
append_param(rule, params['destination_port'], '--destination-port', False)
append_param(rule, params['to_ports'], '--to-ports', False)
append_param(rule, params['set_dscp_mark'], '--set-dscp', False)
append_param(
rule,
params['set_dscp_mark_class'],
'--set-dscp-class',
False)
append_match_flag(rule, params['syn'], '--syn', True)
if 'conntrack' in params['match']:
append_csv(rule, params['ctstate'], '--ctstate')
elif 'state' in params['match']:
append_csv(rule, params['ctstate'], '--state')
elif params['ctstate']:
append_match(rule, params['ctstate'], 'conntrack')
append_csv(rule, params['ctstate'], '--ctstate')
if 'iprange' in params['match']:
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
elif params['src_range'] or params['dst_range']:
append_match(rule, params['src_range'] or params['dst_range'], 'iprange')
append_param(rule, params['src_range'], '--src-range', False)
append_param(rule, params['dst_range'], '--dst-range', False)
if 'set' in params['match']:
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
elif params['match_set']:
append_match(rule, params['match_set'], 'set')
append_param(rule, params['match_set'], '--match-set', False)
append_match_flag(rule, 'match', params['match_set_flags'], False)
append_match(rule, params['limit'] or params['limit_burst'], 'limit')
append_param(rule, params['limit'], '--limit', False)
append_param(rule, params['limit_burst'], '--limit-burst', False)
append_match(rule, params['uid_owner'], 'owner')
append_match_flag(rule, params['uid_owner'], '--uid-owner', True)
append_param(rule, params['uid_owner'], '--uid-owner', False)
append_match(rule, params['gid_owner'], 'owner')
append_match_flag(rule, params['gid_owner'], '--gid-owner', True)
append_param(rule, params['gid_owner'], '--gid-owner', False)
if params['jump'] is None:
append_jump(rule, params['reject_with'], 'REJECT')
append_param(rule, params['reject_with'], '--reject-with', False)
append_param(
rule,
params['icmp_type'],
ICMP_TYPE_OPTIONS[params['ip_version']],
False)
append_match(rule, params['comment'], 'comment')
append_param(rule, params['comment'], '--comment', False)
return rule
def push_arguments(iptables_path, action, params, make_rule=True):
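    # Build the base command: table, action and chain, plus the constructed
    # rule arguments when make_rule is True.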
cmd = [iptables_path]
cmd.extend(['-t', params['table']])
cmd.extend([action, params['chain']])
if action == '-I' and params['rule_num']:
cmd.extend([params['rule_num']])
if make_rule:
cmd.extend(construct_rule(params))
return cmd
def check_rule_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-C', params)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
def append_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-A', params)
module.run_command(cmd, check_rc=True)
def insert_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-I', params)
module.run_command(cmd, check_rc=True)
def remove_rule(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-D', params)
module.run_command(cmd, check_rc=True)
def flush_table(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-F', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def set_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-P', params, make_rule=False)
cmd.append(params['policy'])
module.run_command(cmd, check_rc=True)
def get_chain_policy(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params, make_rule=False)
rc, out, _ = module.run_command(cmd, check_rc=True)
chain_header = out.split("\n")[0]
result = re.search(r'\(policy ([A-Z]+)\)', chain_header)
if result:
return result.group(1)
return None
def get_iptables_version(iptables_path, module):
cmd = [iptables_path, '--version']
rc, out, _ = module.run_command(cmd, check_rc=True)
return out.split('v')[1].rstrip('\n')
def create_chain(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-N', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def check_chain_present(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-L', params, make_rule=False)
rc, _, __ = module.run_command(cmd, check_rc=False)
return (rc == 0)
def delete_chain(iptables_path, module, params):
cmd = push_arguments(iptables_path, '-X', params, make_rule=False)
module.run_command(cmd, check_rc=True)
def main():
module = AnsibleModule(
supports_check_mode=True,
argument_spec=dict(
table=dict(type='str', default='filter', choices=['filter', 'nat', 'mangle', 'raw', 'security']),
state=dict(type='str', default='present', choices=['absent', 'present']),
action=dict(type='str', default='append', choices=['append', 'insert']),
ip_version=dict(type='str', default='ipv4', choices=['ipv4', 'ipv6']),
chain=dict(type='str'),
rule_num=dict(type='str'),
protocol=dict(type='str'),
wait=dict(type='str'),
source=dict(type='str'),
to_source=dict(type='str'),
destination=dict(type='str'),
to_destination=dict(type='str'),
match=dict(type='list', elements='str', default=[]),
tcp_flags=dict(type='dict',
options=dict(
flags=dict(type='list', elements='str'),
flags_set=dict(type='list', elements='str'))
),
jump=dict(type='str'),
gateway=dict(type='str'),
log_prefix=dict(type='str'),
log_level=dict(type='str',
choices=['0', '1', '2', '3', '4', '5', '6', '7',
'emerg', 'alert', 'crit', 'error',
'warning', 'notice', 'info', 'debug'],
default=None,
),
goto=dict(type='str'),
in_interface=dict(type='str'),
out_interface=dict(type='str'),
fragment=dict(type='str'),
set_counters=dict(type='str'),
source_port=dict(type='str'),
destination_port=dict(type='str'),
destination_ports=dict(type='list', elements='str', default=[]),
to_ports=dict(type='str'),
set_dscp_mark=dict(type='str'),
set_dscp_mark_class=dict(type='str'),
comment=dict(type='str'),
ctstate=dict(type='list', elements='str', default=[]),
src_range=dict(type='str'),
dst_range=dict(type='str'),
match_set=dict(type='str'),
match_set_flags=dict(type='str', choices=['src', 'dst', 'src,dst', 'dst,src']),
limit=dict(type='str'),
limit_burst=dict(type='str'),
uid_owner=dict(type='str'),
gid_owner=dict(type='str'),
reject_with=dict(type='str'),
icmp_type=dict(type='str'),
syn=dict(type='str', default='ignore', choices=['ignore', 'match', 'negate']),
flush=dict(type='bool', default=False),
policy=dict(type='str', choices=['ACCEPT', 'DROP', 'QUEUE', 'RETURN']),
chain_management=dict(type='bool', default=False),
),
mutually_exclusive=(
['set_dscp_mark', 'set_dscp_mark_class'],
['flush', 'policy'],
),
required_if=[
['jump', 'TEE', ['gateway']],
['jump', 'tee', ['gateway']],
]
)
args = dict(
changed=False,
failed=False,
ip_version=module.params['ip_version'],
table=module.params['table'],
chain=module.params['chain'],
flush=module.params['flush'],
rule=' '.join(construct_rule(module.params)),
state=module.params['state'],
chain_management=module.params['chain_management'],
)
ip_version = module.params['ip_version']
iptables_path = module.get_bin_path(BINS[ip_version], True)
# Check if chain option is required
if args['flush'] is False and args['chain'] is None:
module.fail_json(msg="Either chain or flush parameter must be specified.")
if module.params.get('log_prefix', None) or module.params.get('log_level', None):
if module.params['jump'] is None:
module.params['jump'] = 'LOG'
elif module.params['jump'] != 'LOG':
module.fail_json(msg="Logging options can only be used with the LOG jump target.")
# Check if wait option is supported
iptables_version = LooseVersion(get_iptables_version(iptables_path, module))
if iptables_version >= LooseVersion(IPTABLES_WAIT_SUPPORT_ADDED):
if iptables_version < LooseVersion(IPTABLES_WAIT_WITH_SECONDS_SUPPORT_ADDED):
module.params['wait'] = ''
else:
module.params['wait'] = None
# Flush the table
if args['flush'] is True:
args['changed'] = True
if not module.check_mode:
flush_table(iptables_path, module, module.params)
# Set the policy
elif module.params['policy']:
current_policy = get_chain_policy(iptables_path, module, module.params)
if not current_policy:
module.fail_json(msg='Can\'t detect current policy')
changed = current_policy != module.params['policy']
args['changed'] = changed
if changed and not module.check_mode:
set_chain_policy(iptables_path, module, module.params)
# Delete the chain if there is no rule in the arguments
elif (args['state'] == 'absent') and not args['rule']:
chain_is_present = check_chain_present(
iptables_path, module, module.params
)
args['changed'] = chain_is_present
if (chain_is_present and args['chain_management'] and not module.check_mode):
delete_chain(iptables_path, module, module.params)
else:
insert = (module.params['action'] == 'insert')
rule_is_present = check_rule_present(
iptables_path, module, module.params
)
chain_is_present = rule_is_present or check_chain_present(
iptables_path, module, module.params
)
should_be_present = (args['state'] == 'present')
# Check if target is up to date
args['changed'] = (rule_is_present != should_be_present)
if args['changed'] is False:
# Target is already up to date
module.exit_json(**args)
# Check only; don't modify
if not module.check_mode:
if should_be_present:
if not chain_is_present and args['chain_management']:
create_chain(iptables_path, module, module.params)
if insert:
insert_rule(iptables_path, module, module.params)
else:
append_rule(iptables_path, module, module.params)
else:
remove_rule(iptables_path, module, module.params)
module.exit_json(**args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml (paste below)
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, which is what I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
changelogs/fragments/79083-jinja2_native-preserve-quotes-in-strings.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml (paste below)
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, which is what I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
lib/ansible/template/native_helpers.py
|
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
from itertools import islice, chain
from types import GeneratorType
from ansible.module_utils._text import to_text
from ansible.module_utils.six import string_types
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
_JSON_MAP = {
"true": True,
"false": False,
"null": None,
}
class Json2Python(ast.NodeTransformer):
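    # Rewrite bare JSON names (true/false/null) into their Python constant
    # equivalents while transforming the parsed AST.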
def visit_Name(self, node):
if node.id not in _JSON_MAP:
return node
return ast.Constant(value=_JSON_MAP[node.id])
def ansible_eval_concat(nodes):
"""Return a string of concatenated compiled nodes. Throw an undefined error
if any of the nodes is undefined.
If the result of concat appears to be a dictionary, list or bool,
try and convert it to such using literal_eval, the same mechanism as used
in jinja2_native.
Used in Templar.template() when jinja2_native=False and convert_data=True.
"""
head = list(islice(nodes, 2))
if not head:
return ''
if len(head) == 1:
out = head[0]
if isinstance(out, NativeJinjaText):
return out
out = to_text(out)
else:
if isinstance(nodes, GeneratorType):
nodes = chain(head, nodes)
out = ''.join([to_text(v) for v in nodes])
# if this looks like a dictionary, list or bool, convert it to such
if out.startswith(('{', '[')) or out in ('True', 'False'):
unsafe = hasattr(out, '__UNSAFE__')
try:
out = ast.literal_eval(
ast.fix_missing_locations(
Json2Python().visit(
ast.parse(out, mode='eval')
)
)
)
except (ValueError, SyntaxError, MemoryError):
pass
else:
if unsafe:
out = wrap_var(out)
return out
def ansible_concat(nodes):
"""Return a string of concatenated compiled nodes. Throw an undefined error
if any of the nodes is undefined. Other than that it is equivalent to
Jinja2's default concat function.
Used in Templar.template() when jinja2_native=False and convert_data=False.
"""
return ''.join([to_text(v) for v in nodes])
def ansible_native_concat(nodes):
"""Return a native Python type from the list of compiled nodes. If the
result is a single node, its value is returned. Otherwise, the nodes are
concatenated as strings. If the result can be parsed with
:func:`ast.literal_eval`, the parsed value is returned. Otherwise, the
string is returned.
https://github.com/pallets/jinja/blob/master/src/jinja2/nativetypes.py
"""
head = list(islice(nodes, 2))
if not head:
return None
if len(head) == 1:
out = head[0]
# TODO send unvaulted data to literal_eval?
if isinstance(out, AnsibleVaultEncryptedUnicode):
return out.data
if isinstance(out, NativeJinjaText):
# Sometimes (e.g. ``| string``) we need to mark variables
# in a special way so that they remain strings and are not
# passed into literal_eval.
# See:
# https://github.com/ansible/ansible/issues/70831
# https://github.com/pallets/jinja/issues/1200
# https://github.com/ansible/ansible/issues/70831#issuecomment-664190894
return out
# short-circuit literal_eval for anything other than strings
if not isinstance(out, string_types):
return out
else:
if isinstance(nodes, GeneratorType):
nodes = chain(head, nodes)
out = ''.join([to_text(v) for v in nodes])
try:
return ast.literal_eval(
# In Python 3.10+ ast.literal_eval removes leading spaces/tabs
# from the given string. For backwards compatibility we need to
# parse the string ourselves without removing leading spaces/tabs.
ast.parse(out, mode='eval')
)
except (ValueError, SyntaxError, MemoryError):
return out
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml (paste below)
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, which is what I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
test/integration/targets/jinja2_native_types/runme.sh
|
#!/usr/bin/env bash
set -eux
export ANSIBLE_JINJA2_NATIVE=1
ansible-playbook runtests.yml -v "$@"
ansible-playbook --vault-password-file test_vault_pass test_vault.yml -v "$@"
ansible-playbook test_hostvars.yml -v "$@"
ansible-playbook nested_undefined.yml -v "$@"
unset ANSIBLE_JINJA2_NATIVE
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```shell
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, and which I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
test/integration/targets/jinja2_native_types/test_casting.yml
|
- name: cast things to other things
set_fact:
int_to_str: "'{{ i_two }}'"
int_to_str2: "{{ i_two | string }}"
str_to_int: "{{ s_two|int }}"
dict_to_str: "'{{ dict_one }}'"
list_to_str: "'{{ list_one }}'"
int_to_bool: "{{ i_one|bool }}"
str_true_to_bool: "{{ s_true|bool }}"
str_false_to_bool: "{{ s_false|bool }}"
list_to_json_str: "{{ list_one | to_json }}"
list_to_yaml_str: "{{ list_one | to_yaml }}"
- assert:
that:
- 'int_to_str == "2"'
- 'int_to_str|type_debug in ["str", "unicode"]'
- 'int_to_str2 == "2"'
- 'int_to_str2|type_debug in ["NativeJinjaText"]'
- 'str_to_int == 2'
- 'str_to_int|type_debug == "int"'
- 'dict_to_str|type_debug in ["str", "unicode"]'
- 'list_to_str|type_debug in ["str", "unicode"]'
- 'int_to_bool is sameas true'
- 'int_to_bool|type_debug == "bool"'
- 'str_true_to_bool is sameas true'
- 'str_true_to_bool|type_debug == "bool"'
- 'str_false_to_bool is sameas false'
- 'str_false_to_bool|type_debug == "bool"'
- 'list_to_json_str|type_debug in ["NativeJinjaText"]'
- 'list_to_yaml_str|type_debug in ["NativeJinjaText"]'
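# Editor's note (not in the original file): the nested quoting in int_to_str is
# what keeps the value a string under native mode - "'{{ i_two }}'" renders to
# the text '2', which ast.literal_eval then evaluates to the Python string "2"
# rather than the integer 2.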
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```shell
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, and which I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
test/integration/targets/jinja2_native_types/test_concatentation.yml
|
- name: add two ints
set_fact:
integer_sum: "{{ i_one + i_two }}"
- assert:
that:
- 'integer_sum == 3'
- 'integer_sum|type_debug == "int"'
- name: add casted string and int
set_fact:
integer_sum2: "{{ s_one|int + i_two }}"
- assert:
that:
- 'integer_sum2 == 3'
- 'integer_sum2|type_debug == "int"'
- name: concatenate int and string
set_fact:
string_sum: "'{{ [i_one, s_two]|join('') }}'"
- assert:
that:
- 'string_sum == "12"'
- 'string_sum|type_debug in ["str", "unicode"]'
- name: add two lists
set_fact:
list_sum: "{{ list_one + list_two }}"
- assert:
that:
- 'list_sum == ["one", "two", "three", "four"]'
- 'list_sum|type_debug == "list"'
- name: add two lists, multi expression
set_fact:
list_sum_multi: "{{ list_one }} + {{ list_two }}"
- assert:
that:
- 'list_sum_multi|type_debug in ["str", "unicode"]'
- name: add two dicts
set_fact:
dict_sum: "{{ dict_one + dict_two }}"
ignore_errors: yes
- assert:
that:
- 'dict_sum is undefined'
- name: loop through list with strings
set_fact:
list_for_strings: "{% for x in list_one %}{{ x }}{% endfor %}"
- assert:
that:
- 'list_for_strings == "onetwo"'
- 'list_for_strings|type_debug in ["str", "unicode"]'
- name: loop through list with int
set_fact:
list_for_int: "{% for x in list_one_int %}{{ x }}{% endfor %}"
- assert:
that:
- 'list_for_int == 1'
- 'list_for_int|type_debug == "int"'
- name: loop through list with ints
set_fact:
list_for_ints: "{% for x in list_ints %}{{ x }}{% endfor %}"
- assert:
that:
- 'list_for_ints == 42'
- 'list_for_ints|type_debug == "int"'
- name: loop through list to create a new list
set_fact:
list_from_list: "[{% for x in list_ints %}{{ x }},{% endfor %}]"
- assert:
that:
- 'list_from_list == [4, 2]'
- 'list_from_list|type_debug == "list"'
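# Editor's note (not in the original file): the list_from_list case works
# because native mode first renders the template to the text "[4,2,]", and
# ast.literal_eval then turns that text into the Python list [4, 2], trailing
# comma included.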
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,083 |
enabling jinja2 native leads to quotes nested in quotes being stripped
|
### Summary
Enabling jinja2 native leads to quotes nested in other quotes (single quotes in double quotes or vice versa) being stripped. This stripping is additionally done in a seemingly inconsistent and unpredictable way.
### Issue Type
Bug Report
### Component Name
jinja2
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = /home/yannik/projects/closed/ansible.cfg
configured module search path = ['/home/yannik/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.10/site-packages/ansible
ansible collection location = /home/yannik/projects/closed/vendor_collections
executable location = /usr/local/bin/ansible
python version = 3.10.7 (main, Sep 7 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(/home/yannik/projects/xxx/ansible.cfg) = True
```
### OS / Environment
fedora 36
### Steps to Reproduce
```yaml
- hosts: all
tasks:
- copy:
content: |
{{ myvar }}
dest: out1
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
{{ myvar }}
dest: out2
vars:
myvar: '"hello"'
- copy:
content: |
{{ myvar }}
dest: out3
vars:
myvar: ' "hello"'
- copy:
content: |
a = {{ myvar }}
dest: out4
vars:
myvar: '"hello {{ ansible_host }}"'
- copy:
content: |
a = {{ myvar }}
dest: out5
vars:
myvar: '"hello"'
- copy:
content: |
a = {{ myvar }}
dest: out6
vars:
myvar: ' "hello"'
```
```shell
ansible-playbook -i localhost, test.yml
cat ~/out*
```
### Expected Results
This is the result with jinja2 native disabled, and which I would generally expect:
```
"hello localhost"
"hello"
"hello"
a = "hello localhost"
a = "hello"
a = "hello"
```
### Actual Results
```console
hello localhost
hello
"hello"
a = hello localhost
a = "hello"
a = "hello"
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79083
|
https://github.com/ansible/ansible/pull/79119
|
f9cb6796755cda8d221a4b5f6851f0e50a4af91e
|
d34b5786858f699ef36da6785464021889eaa672
| 2022-10-08T22:12:34Z |
python
| 2022-10-12T17:16:06Z |
test/integration/targets/jinja2_native_types/test_preserving_quotes.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,101 |
ansible.builtin.copy deletes /dev/null instead of copying when called as root
|
### Summary
When calling `ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b`, ansible complains that the _target_ file doesn't exist, and then, if the target file exists, removes the source `/dev/null` instead of copying it onto the target.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/elavarde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/elavarde/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8
### Steps to Reproduce
```shell
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
touch /tmp/xxx
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
```
### Expected Results
I expected the target file to be emptied by the copy module (similar to what `cp /dev/null somefile` does). It also works if the _target_ file pre-exists and I call the command _without_ become.
### Actual Results
```console
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
localhost | FAILED! => {
"changed": false,
"msg": "path /tmp/xxx does not exist",
"path": "/tmp/xxx"
}
$ touch /tmp/xxx
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
[WARNING]: Error deleting remote temporary files (rc: 1, stderr: /bin/sh: /dev/null: Permission denied })
Process WorkerProcess-1:
localhost | CHANGED => {
"changed": true,
"checksum": null,
"dest": "/tmp/xxx",
"gid": 1000,
"group": "elavarde",
"md5sum": null,
"mode": "0664",
"owner": "elavarde",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 0,
"src": "/dev/null",
"state": "file",
"uid": 1000
}
Traceback (most recent call last):
File "/usr/lib64/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/site-packages/ansible/executor/process/worker.py", line 138, in run
sys.stdout = sys.stderr = open(os.devnull, 'w')
PermissionError: [Errno 13] Permission denied: '/dev/null'
$ ls -la /dev/null
ls: cannot access '/dev/null': No such file or directory
```
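
_Editor's note, not part of the original report: the likely mechanism, inferred from the `copy.py` source in this record - `os.path.isfile()` is false for character devices, so with `remote_src=true` the module never takes its copy-to-tempfile branch and hands `/dev/null` itself to `atomic_move()`, which renames the device node over the destination (hence root is required and the node disappears):_

```python
import os

print(os.path.isfile('/dev/null'))  # False - character device, not a regular file
# so `if remote_src and os.path.isfile(b_src):` in copy.py makes no temporary
# copy, and module.atomic_move(b_src, dest) moves /dev/null onto dest.
```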
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79101
|
https://github.com/ansible/ansible/pull/79102
|
cb2e434dd2359a9fe1c00e75431f4abeff7381e8
|
f66016df0e22e1234417dc3538bea75299b4e9eb
| 2022-10-11T14:59:50Z |
python
| 2022-10-17T17:07:04Z |
changelogs/fragments/dont_move_non_files.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,101 |
ansible.builtin.copy deletes /dev/null instead of copying when called as root
|
### Summary
When calling `ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b`, ansible complains that the _target_ file doesn't exist, and then, if the target file exists, removes the source `/dev/null` instead of copying it onto the target.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/elavarde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/elavarde/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8
### Steps to Reproduce
```shell
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
touch /tmp/xxx
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
```
### Expected Results
I expected the target file to be emptied by the copy module (similar to what `cp /dev/null somefile` does). It also works if the _target_ file pre-exists and I call the command _without_ become.
### Actual Results
```console
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
localhost | FAILED! => {
"changed": false,
"msg": "path /tmp/xxx does not exist",
"path": "/tmp/xxx"
}
$ touch /tmp/xxx
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
[WARNING]: Error deleting remote temporary files (rc: 1, stderr: /bin/sh: /dev/null: Permission denied })
Process WorkerProcess-1:
localhost | CHANGED => {
"changed": true,
"checksum": null,
"dest": "/tmp/xxx",
"gid": 1000,
"group": "elavarde",
"md5sum": null,
"mode": "0664",
"owner": "elavarde",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 0,
"src": "/dev/null",
"state": "file",
"uid": 1000
}
Traceback (most recent call last):
File "/usr/lib64/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/site-packages/ansible/executor/process/worker.py", line 138, in run
sys.stdout = sys.stderr = open(os.devnull, 'w')
PermissionError: [Errno 13] Permission denied: '/dev/null'
$ ls -la /dev/null
ls: cannot access '/dev/null': No such file or directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79101
|
https://github.com/ansible/ansible/pull/79102
|
cb2e434dd2359a9fe1c00e75431f4abeff7381e8
|
f66016df0e22e1234417dc3538bea75299b4e9eb
| 2022-10-11T14:59:50Z |
python
| 2022-10-17T17:07:04Z |
lib/ansible/modules/copy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: copy
version_added: historical
short_description: Copy files to remote locations
description:
- The C(copy) module copies a file from the local or remote machine to a location on the remote machine.
- Use the M(ansible.builtin.fetch) module to copy files from remote locations to the local box.
- If you need variable interpolation in copied files, use the M(ansible.builtin.template) module.
Using a variable in the C(content) field will result in unpredictable output.
- For Windows targets, use the M(ansible.windows.win_copy) module instead.
options:
src:
description:
- Local path to a file to copy to the remote server.
- This can be absolute or relative.
- If path is a directory, it is copied recursively. In this case, if path ends
with "/", only inside contents of that directory are copied to destination.
Otherwise, if it does not end with "/", the directory itself with all contents
is copied. This behavior is similar to the C(rsync) command line tool.
type: path
content:
description:
- When used instead of C(src), sets the contents of a file directly to the specified value.
- Works only when C(dest) is a file. Creates the file if it does not exist.
- For advanced formatting or if C(content) contains a variable, use the
M(ansible.builtin.template) module.
type: str
version_added: '1.1'
dest:
description:
- Remote absolute path where the file should be copied to.
- If C(src) is a directory, this must be a directory too.
- If C(dest) is a non-existent path and if either C(dest) ends with "/" or C(src) is a directory, C(dest) is created.
- If I(dest) is a relative path, the starting directory is determined by the remote host.
- If C(src) and C(dest) are files, the parent directory of C(dest) is not created and the task fails if it does not already exist.
type: path
required: yes
backup:
description:
- Create a backup file including the timestamp information so you can get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
version_added: '0.7'
force:
description:
- Influence whether the remote file must always be replaced.
- If C(yes), the remote file will be replaced when contents are different than the source.
- If C(no), the file will only be transferred if the destination does not exist.
type: bool
default: yes
version_added: '1.1'
mode:
description:
- The permissions of the destination file or directory.
- For those used to C(/usr/bin/chmod) remember that modes are actually octal numbers.
You must either add a leading zero so that Ansible's YAML parser knows it is an octal number
(like C(0644) or C(01777)) or quote it (like C('644') or C('1777')) so Ansible receives a string
and can do its own conversion from string into number. Giving Ansible a number without following
one of these rules will end up with a decimal number which will have unexpected results.
- As of Ansible 1.8, the mode may be specified as a symbolic mode (for example, C(u+rwx) or C(u=rw,g=r,o=r)).
- As of Ansible 2.3, the mode may also be the special string C(preserve).
- C(preserve) means that the file will be given the same permissions as the source file.
- When doing a recursive copy, see also C(directory_mode).
- If C(mode) is not specified and the destination file B(does not) exist, the default C(umask) on the system will be used
when setting the mode for the newly created file.
- If C(mode) is not specified and the destination file B(does) exist, the mode of the existing file will be used.
- Specifying C(mode) is the best way to ensure files are created with the correct permissions.
See CVE-2020-1736 for further details.
directory_mode:
description:
- When doing a recursive copy set the mode for the directories.
- If this is not set we will use the system defaults.
- The mode is only set on directories which are newly created, and will not affect those that already existed.
type: raw
version_added: '1.5'
remote_src:
description:
- Influence whether C(src) needs to be transferred or already is present remotely.
- If C(no), it will search for C(src) on the controller node.
- If C(yes) it will search for C(src) on the managed (remote) node.
- C(remote_src) supports recursive copying as of version 2.8.
- C(remote_src) only works with C(mode=preserve) as of version 2.6.
- Autodecryption of files does not work when C(remote_src=yes).
type: bool
default: no
version_added: '2.0'
follow:
description:
- This flag indicates that filesystem links in the destination, if they exist, should be followed.
type: bool
default: no
version_added: '1.8'
local_follow:
description:
- This flag indicates that filesystem links in the source tree, if they exist, should be followed.
type: bool
default: yes
version_added: '2.4'
checksum:
description:
- SHA1 checksum of the file being transferred.
- Used to validate that the copy of the file was successful.
- If this is not provided, ansible will use the local calculated checksum of the src file.
type: str
version_added: '2.5'
extends_documentation_fragment:
- decrypt
- files
- validate
- action_common_attributes
- action_common_attributes.files
- action_common_attributes.flow
notes:
- The M(ansible.builtin.copy) module recursively copy facility does not scale to lots (>hundreds) of files.
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.fetch
- module: ansible.builtin.file
- module: ansible.builtin.template
- module: ansible.posix.synchronize
- module: ansible.windows.win_copy
author:
- Ansible Core Team
- Michael DeHaan
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: posix
safe_file_operations:
support: full
vault:
support: full
version_added: '2.2'
'''
EXAMPLES = r'''
- name: Copy file with owner and permissions
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Copy file with owner and permission, using symbolic representation
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u=rw,g=r,o=r
- name: Another symbolic mode example, adding some permissions and removing others
ansible.builtin.copy:
src: /srv/myfiles/foo.conf
dest: /etc/foo.conf
owner: foo
group: foo
mode: u+rw,g-wx,o-rwx
- name: Copy a new "ntp.conf" file into place, backing up the original if it differs from the copied version
ansible.builtin.copy:
src: /mine/ntp.conf
dest: /etc/ntp.conf
owner: root
group: root
mode: '0644'
backup: yes
- name: Copy a new "sudoers" file into place, after passing validation with visudo
ansible.builtin.copy:
src: /mine/sudoers
dest: /etc/sudoers
validate: /usr/sbin/visudo -csf %s
- name: Copy a "sudoers" file on the remote machine for editing
ansible.builtin.copy:
src: /etc/sudoers
dest: /etc/sudoers.edit
remote_src: yes
validate: /usr/sbin/visudo -csf %s
- name: Copy using inline content
ansible.builtin.copy:
content: '# This file was moved to /etc/other.conf'
dest: /etc/mine.conf
- name: If follow=yes, /path/to/file will be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: yes
- name: If follow=no, /path/to/link will become a file and be overwritten by contents of foo.conf
ansible.builtin.copy:
src: /etc/foo.conf
dest: /path/to/link # link to /path/to/file
follow: no
'''
RETURN = r'''
dest:
description: Destination file/path.
returned: success
type: str
sample: /path/to/file.txt
src:
description: Source file used for the copy on the target machine.
returned: changed
type: str
sample: /home/httpd/.ansible/tmp/ansible-tmp-1423796390.97-147729857856000/source
md5sum:
description: MD5 checksum of the file after running copy.
returned: when supported
type: str
sample: 2a5aeecc61dc98c4d780b14b330e3282
checksum:
description: SHA1 checksum of the file after running copy.
returned: success
type: str
sample: 6e642bb8dd5c2e027bf21dd923337cbb4214f827
backup_file:
description: Name of backup file created.
returned: changed and if backup=yes
type: str
sample: /path/to/file.txt.2015-02-12@22:09~
gid:
description: Group id of the file, after execution.
returned: success
type: int
sample: 100
group:
description: Group of the file, after execution.
returned: success
type: str
sample: httpd
owner:
description: Owner of the file, after execution.
returned: success
type: str
sample: httpd
uid:
description: Owner id of the file, after execution.
returned: success
type: int
sample: 100
mode:
description: Permissions of the target, after execution.
returned: success
type: str
sample: "0644"
size:
description: Size of the target, after execution.
returned: success
type: int
sample: 1220
state:
description: State of the target, after execution.
returned: success
type: str
sample: file
'''
import errno
import filecmp
import grp
import os
import os.path
import platform
import pwd
import shutil
import stat
import tempfile
import traceback
from ansible.module_utils._text import to_bytes, to_native
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.six import PY3
# The AnsibleModule object
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
# Once we get run_command moved into common, we can move this into a common/files module. We can't
# until then because of the module.run_command() method. We may need to move it into
# basic::AnsibleModule() until then but if so, make it a private function so that we don't have to
# keep it for backwards compatibility later.
def clear_facls(path):
setfacl = get_bin_path('setfacl')
# FIXME "setfacl -b" is available on Linux and FreeBSD. There is "setfacl -D e" on z/OS. Others?
acl_command = [setfacl, '-b', path]
b_acl_command = [to_bytes(x) for x in acl_command]
locale = get_best_parsable_locale(module)
rc, out, err = module.run_command(b_acl_command, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale))
if rc != 0:
raise RuntimeError('Error running "{0}": stdout: "{1}"; stderr: "{2}"'.format(' '.join(b_acl_command), out, err))
def split_pre_existing_dir(dirname):
'''
Return the first pre-existing directory and a list of the new directories that will be created.
'''
head, tail = os.path.split(dirname)
b_head = to_bytes(head, errors='surrogate_or_strict')
if head == '':
return ('.', [tail])
if not os.path.exists(b_head):
if head == '/':
raise AnsibleModuleError(results={'msg': "The '/' directory doesn't exist on this machine."})
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(head)
else:
return (head, [tail])
new_directory_list.append(tail)
return (pre_existing_dir, new_directory_list)
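# Editor's illustration (hypothetical paths; assume only /srv exists):
#   split_pre_existing_dir('/srv/www/logs') -> ('/srv', ['www', 'logs'])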
def adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed):
'''
Walk the new directories list and make sure that permissions are as we would expect
'''
if new_directory_list:
working_dir = os.path.join(pre_existing_dir, new_directory_list.pop(0))
directory_args['path'] = working_dir
changed = module.set_fs_attributes_if_different(directory_args, changed)
changed = adjust_recursive_directory_permissions(working_dir, new_directory_list, module, directory_args, changed)
return changed
def chown_recursive(path, module):
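    """Recursively set `owner` and `group` on `path`; in check mode, compute whether a change would be needed without applying it."""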
changed = False
owner = module.params['owner']
group = module.params['group']
if owner is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = module.set_owner_if_different(dirpath, owner, False)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = module.set_owner_if_different(dir, owner, False)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = module.set_owner_if_different(file, owner, False)
if owner_changed is True:
changed = owner_changed
else:
uid = pwd.getpwnam(owner).pw_uid
for dirpath, dirnames, filenames in os.walk(path):
owner_changed = (os.stat(dirpath).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
owner_changed = (os.stat(dir).st_uid != uid)
if owner_changed is True:
changed = owner_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
owner_changed = (os.stat(file).st_uid != uid)
if owner_changed is True:
changed = owner_changed
if group is not None:
if not module.check_mode:
for dirpath, dirnames, filenames in os.walk(path):
group_changed = module.set_group_if_different(dirpath, group, False)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = module.set_group_if_different(dir, group, False)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = module.set_group_if_different(file, group, False)
if group_changed is True:
changed = group_changed
else:
gid = grp.getgrnam(group).gr_gid
for dirpath, dirnames, filenames in os.walk(path):
group_changed = (os.stat(dirpath).st_gid != gid)
if group_changed is True:
changed = group_changed
for dir in [os.path.join(dirpath, d) for d in dirnames]:
group_changed = (os.stat(dir).st_gid != gid)
if group_changed is True:
changed = group_changed
for file in [os.path.join(dirpath, f) for f in filenames]:
group_changed = (os.stat(file).st_gid != gid)
if group_changed is True:
changed = group_changed
return changed
def copy_diff_files(src, dest, module):
"""Copy files that are different between `src` directory and `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
diff_files = filecmp.dircmp(src, dest).diff_files
if len(diff_files):
changed = True
if not module.check_mode:
for item in diff_files:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
else:
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
changed = True
return changed
def copy_left_only(src, dest, module):
"""Copy files that exist in `src` directory only to the `dest` directory."""
changed = False
owner = module.params['owner']
group = module.params['group']
local_follow = module.params['local_follow']
left_only = filecmp.dircmp(src, dest).left_only
if len(left_only):
changed = True
if not module.check_mode:
for item in left_only:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is True:
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
if os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is True:
shutil.copyfile(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path) and local_follow is False:
linkto = os.readlink(b_src_item_path)
os.symlink(linkto, b_dest_item_path)
if not os.path.islink(b_src_item_path) and os.path.isfile(b_src_item_path):
shutil.copyfile(b_src_item_path, b_dest_item_path)
shutil.copymode(b_src_item_path, b_dest_item_path)
if owner is not None:
module.set_owner_if_different(b_dest_item_path, owner, False)
if group is not None:
module.set_group_if_different(b_dest_item_path, group, False)
if not os.path.islink(b_src_item_path) and os.path.isdir(b_src_item_path):
shutil.copytree(b_src_item_path, b_dest_item_path, symlinks=not local_follow)
chown_recursive(b_dest_item_path, module)
changed = True
return changed
def copy_common_dirs(src, dest, module):
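    """Recurse into directories common to `src` and `dest`, copying any differing or left-only files."""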
changed = False
common_dirs = filecmp.dircmp(src, dest).common_dirs
for item in common_dirs:
src_item_path = os.path.join(src, item)
dest_item_path = os.path.join(dest, item)
b_src_item_path = to_bytes(src_item_path, errors='surrogate_or_strict')
b_dest_item_path = to_bytes(dest_item_path, errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src_item_path, b_dest_item_path, module)
left_only_changed = copy_left_only(b_src_item_path, b_dest_item_path, module)
if diff_files_changed or left_only_changed:
changed = True
# recurse into subdirectory
changed = copy_common_dirs(os.path.join(src, item), os.path.join(dest, item), module) or changed
return changed
def main():
global module
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path'),
_original_basename=dict(type='str'), # used to handle 'dest is a directory' via template, a slight hack
content=dict(type='str', no_log=True),
dest=dict(type='path', required=True),
backup=dict(type='bool', default=False),
force=dict(type='bool', default=True),
validate=dict(type='str'),
directory_mode=dict(type='raw'),
remote_src=dict(type='bool'),
local_follow=dict(type='bool'),
checksum=dict(type='str'),
follow=dict(type='bool', default=False),
),
add_file_common_args=True,
supports_check_mode=True,
)
src = module.params['src']
b_src = to_bytes(src, errors='surrogate_or_strict')
dest = module.params['dest']
# Make sure we always have a directory component for later processing
if os.path.sep not in dest:
dest = '.{0}{1}'.format(os.path.sep, dest)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
backup = module.params['backup']
force = module.params['force']
_original_basename = module.params.get('_original_basename', None)
validate = module.params.get('validate', None)
follow = module.params['follow']
local_follow = module.params['local_follow']
mode = module.params['mode']
owner = module.params['owner']
group = module.params['group']
remote_src = module.params['remote_src']
checksum = module.params['checksum']
if not os.path.exists(b_src):
module.fail_json(msg="Source %s not found" % (src))
if not os.access(b_src, os.R_OK):
module.fail_json(msg="Source %s not readable" % (src))
# Preserve is usually handled in the action plugin but mode + remote_src has to be done on the
# remote host
if module.params['mode'] == 'preserve':
module.params['mode'] = '0%03o' % stat.S_IMODE(os.stat(b_src).st_mode)
mode = module.params['mode']
checksum_dest = None
if os.path.isfile(src):
checksum_src = module.sha1(src)
else:
checksum_src = None
# Backwards compat only. This will be None in FIPS mode
try:
if os.path.isfile(src):
md5sum_src = module.md5(src)
else:
md5sum_src = None
except ValueError:
md5sum_src = None
changed = False
if checksum and checksum_src != checksum:
module.fail_json(
msg='Copied file does not match the expected checksum. Transfer failed.',
checksum=checksum_src,
expected_checksum=checksum
)
# Special handling for recursive copy - create intermediate dirs
if dest.endswith(os.sep):
if _original_basename:
dest = os.path.join(dest, _original_basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
dirname = os.path.dirname(dest)
b_dirname = to_bytes(dirname, errors='surrogate_or_strict')
if not os.path.exists(b_dirname):
try:
(pre_existing_dir, new_directory_list) = split_pre_existing_dir(dirname)
except AnsibleModuleError as e:
                e.results['msg'] += ' Could not copy to {0}'.format(dest)
module.fail_json(**e.results)
os.makedirs(b_dirname)
changed = True
directory_args = module.load_file_common_arguments(module.params)
directory_mode = module.params["directory_mode"]
if directory_mode is not None:
directory_args['mode'] = directory_mode
else:
directory_args['mode'] = None
adjust_recursive_directory_permissions(pre_existing_dir, new_directory_list, module, directory_args, changed)
if os.path.isdir(b_dest):
basename = os.path.basename(src)
if _original_basename:
basename = _original_basename
dest = os.path.join(dest, basename)
b_dest = to_bytes(dest, errors='surrogate_or_strict')
if os.path.exists(b_dest):
if os.path.islink(b_dest) and follow:
b_dest = os.path.realpath(b_dest)
dest = to_native(b_dest, errors='surrogate_or_strict')
if not force:
module.exit_json(msg="file already exists", src=src, dest=dest, changed=False)
if os.access(b_dest, os.R_OK) and os.path.isfile(b_dest):
checksum_dest = module.sha1(dest)
else:
if not os.path.exists(os.path.dirname(b_dest)):
try:
# os.path.exists() can return false in some
# circumstances where the directory does not have
# the execute bit for the current user set, in
# which case the stat() call will raise an OSError
os.stat(os.path.dirname(b_dest))
except OSError as e:
if "permission denied" in to_native(e).lower():
module.fail_json(msg="Destination directory %s is not accessible" % (os.path.dirname(dest)))
module.fail_json(msg="Destination directory %s does not exist" % (os.path.dirname(dest)))
if not os.access(os.path.dirname(b_dest), os.W_OK) and not module.params['unsafe_writes']:
module.fail_json(msg="Destination %s not writable" % (os.path.dirname(dest)))
backup_file = None
if checksum_src != checksum_dest or os.path.islink(b_dest):
if not module.check_mode:
try:
if backup:
if os.path.exists(b_dest):
backup_file = module.backup_local(dest)
# allow for conversion from symlink.
if os.path.islink(b_dest):
os.unlink(b_dest)
open(b_dest, 'w').close()
if validate:
# if we have a mode, make sure we set it on the temporary
# file source as some validations may require it
if mode is not None:
module.set_mode_if_different(src, mode, False)
if owner is not None:
module.set_owner_if_different(src, owner, False)
if group is not None:
module.set_group_if_different(src, group, False)
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % src)
if rc != 0:
module.fail_json(msg="failed to validate", exit_status=rc, stdout=out, stderr=err)
b_mysrc = b_src
if remote_src and os.path.isfile(b_src):
_, b_mysrc = tempfile.mkstemp(dir=os.path.dirname(b_dest))
shutil.copyfile(b_src, b_mysrc)
try:
shutil.copystat(b_src, b_mysrc)
except OSError as err:
if err.errno == errno.ENOSYS and mode == "preserve":
module.warn("Unable to copy stats {0}".format(to_native(b_src)))
else:
raise
# might be needed below
if PY3 and hasattr(os, 'listxattr'):
try:
src_has_acls = 'system.posix_acl_access' in os.listxattr(src)
except Exception as e:
# assume unwanted ACLs by default
src_has_acls = True
module.atomic_move(b_mysrc, dest, unsafe_writes=module.params['unsafe_writes'])
if PY3 and hasattr(os, 'listxattr') and platform.system() == 'Linux' and not remote_src:
# atomic_move used above to copy src into dest might, in some cases,
# use shutil.copy2 which in turn uses shutil.copystat.
# Since Python 3.3, shutil.copystat copies file extended attributes:
# https://docs.python.org/3/library/shutil.html#shutil.copystat
# os.listxattr (along with others) was added to handle the operation.
# This means that on Python 3 we are copying the extended attributes which includes
# the ACLs on some systems - further limited to Linux as the documentation above claims
# that the extended attributes are copied only on Linux. Also, os.listxattr is only
# available on Linux.
# If not remote_src, then the file was copied from the controller. In that
# case, any filesystem ACLs are artifacts of the copy rather than preservation
# of existing attributes. Get rid of them:
if src_has_acls:
# FIXME If dest has any default ACLs, there are not applied to src now because
# they were overridden by copystat. Should/can we do anything about this?
# 'system.posix_acl_default' in os.listxattr(os.path.dirname(b_dest))
try:
clear_facls(dest)
except ValueError as e:
if 'setfacl' in to_native(e):
# No setfacl so we're okay. The controller couldn't have set a facl
# without the setfacl command
pass
else:
raise
except RuntimeError as e:
# setfacl failed.
if 'Operation not supported' in to_native(e):
# The file system does not support ACLs.
pass
else:
raise
except (IOError, OSError):
module.fail_json(msg="failed to copy: %s to %s" % (src, dest), traceback=traceback.format_exc())
changed = True
# If neither have checksums, both src and dest are directories.
if checksum_src is None and checksum_dest is None:
if remote_src and os.path.isdir(module.params['src']):
b_src = to_bytes(module.params['src'], errors='surrogate_or_strict')
b_dest = to_bytes(module.params['dest'], errors='surrogate_or_strict')
if src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode:
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
chown_recursive(dest, module)
changed = True
if not src.endswith(os.path.sep) and os.path.isdir(module.params['dest']):
b_basename = to_bytes(os.path.basename(src), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
shutil.copytree(b_src, b_dest, symlinks=not local_follow)
changed = True
chown_recursive(dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
if os.path.exists(b_dest):
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if diff_files_changed or left_only_changed or common_dirs_changed or owner_group_changed:
changed = True
if not src.endswith(os.path.sep) and not os.path.exists(module.params['dest']):
b_basename = to_bytes(os.path.basename(module.params['src']), errors='surrogate_or_strict')
b_dest = to_bytes(os.path.join(b_dest, b_basename), errors='surrogate_or_strict')
if not module.check_mode and not os.path.exists(b_dest):
os.makedirs(b_dest)
changed = True
b_src = to_bytes(os.path.join(module.params['src'], ""), errors='surrogate_or_strict')
diff_files_changed = copy_diff_files(b_src, b_dest, module)
left_only_changed = copy_left_only(b_src, b_dest, module)
common_dirs_changed = copy_common_dirs(b_src, b_dest, module)
owner_group_changed = chown_recursive(b_dest, module)
if module.check_mode and not os.path.exists(b_dest):
changed = True
res_args = dict(
dest=dest, src=src, md5sum=md5sum_src, checksum=checksum_src, changed=changed
)
if backup_file:
res_args['backup_file'] = backup_file
if not module.check_mode:
file_args = module.load_file_common_arguments(module.params, path=dest)
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'])
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,101 |
ansible.builtin.copy deletes /dev/null instead of copying when called as root
|
### Summary
When calling `ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b`, ansible complains that the _target_ file doesn't exist, and then, if the target file exists, removes the source `/dev/null` instead of copying it onto the target.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/elavarde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/elavarde/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8
### Steps to Reproduce
```shell
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
touch /tmp/xxx
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
```
### Expected Results
I expected the target file to be emptied by the copy module (similar to what `cp /dev/null somefile` does). It also works if the _target_ file pre-exists and I call the command _without_ become.
### Actual Results
```console
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
localhost | FAILED! => {
"changed": false,
"msg": "path /tmp/xxx does not exist",
"path": "/tmp/xxx"
}
$ touch /tmp/xxx
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
[WARNING]: Error deleting remote temporary files (rc: 1, stderr: /bin/sh: /dev/null: Permission denied })
Process WorkerProcess-1:
localhost | CHANGED => {
"changed": true,
"checksum": null,
"dest": "/tmp/xxx",
"gid": 1000,
"group": "elavarde",
"md5sum": null,
"mode": "0664",
"owner": "elavarde",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 0,
"src": "/dev/null",
"state": "file",
"uid": 1000
}
Traceback (most recent call last):
File "/usr/lib64/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/site-packages/ansible/executor/process/worker.py", line 138, in run
sys.stdout = sys.stderr = open(os.devnull, 'w')
PermissionError: [Errno 13] Permission denied: '/dev/null'
$ ls -la /dev/null
ls: cannot access '/dev/null': No such file or directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79101
|
https://github.com/ansible/ansible/pull/79102
|
cb2e434dd2359a9fe1c00e75431f4abeff7381e8
|
f66016df0e22e1234417dc3538bea75299b4e9eb
| 2022-10-11T14:59:50Z |
python
| 2022-10-17T17:07:04Z |
test/integration/targets/copy/tasks/main.yml
|
- block:
- name: Create a local temporary directory
shell: mktemp -d /tmp/ansible_test.XXXXXXXXX
register: tempfile_result
delegate_to: localhost
- set_fact:
local_temp_dir: '{{ tempfile_result.stdout }}'
remote_dir: '{{ remote_tmp_dir }}/copy'
symlinks:
ansible-test-abs-link: /tmp/ansible-test-abs-link
ansible-test-abs-link-dir: /tmp/ansible-test-abs-link-dir
circles: ../
invalid: invalid
invalid2: ../invalid
out_of_tree_circle: /tmp/ansible-test-link-dir/out_of_tree_circle
subdir3: ../subdir2/subdir3
bar.txt: ../bar.txt
- file: path={{local_temp_dir}} state=directory
name: ensure temp dir exists
# file cannot do this properly, use command instead
- name: Create symbolic link
command: "ln -s '{{ item.value }}' '{{ item.key }}'"
args:
chdir: '{{role_path}}/files/subdir/subdir1'
with_dict: "{{ symlinks }}"
delegate_to: localhost
- name: Create remote unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
register: user
- name: Check sudoers dir
stat:
path: /etc/sudoers.d
register: etc_sudoers
- name: Set sudoers.d path fact
set_fact:
sudoers_d_file: "{{ '/etc/sudoers.d' if etc_sudoers.stat.exists else '/usr/local/etc/sudoers.d' }}/{{ remote_unprivileged_user }}"
- name: Create sudoers file
copy:
dest: "{{ sudoers_d_file }}"
content: "{{ remote_unprivileged_user }} ALL=(ALL) NOPASSWD: ALL"
- file:
path: "{{ user.home }}/.ssh"
owner: '{{ remote_unprivileged_user }}'
state: directory
mode: 0700
- name: Duplicate authorized_keys
copy:
src: $HOME/.ssh/authorized_keys
dest: '{{ user.home }}/.ssh/authorized_keys'
owner: '{{ remote_unprivileged_user }}'
mode: 0600
remote_src: yes
- file:
path: "{{ remote_dir }}"
state: directory
remote_user: '{{ remote_unprivileged_user }}'
# execute tests tasks using an unprivileged user, this is useful to avoid
# local/remote ambiguity when controller and managed hosts are identical.
- import_tasks: tests.yml
remote_user: '{{ remote_unprivileged_user }}'
- import_tasks: acls.yml
when: ansible_system == 'Linux'
- import_tasks: selinux.yml
when: ansible_os_family == 'RedHat' and ansible_selinux.get('mode') == 'enforcing'
- import_tasks: no_log.yml
delegate_to: localhost
- import_tasks: check_mode.yml
# https://github.com/ansible/ansible/issues/57618
- name: Test diff contents
copy:
content: 'Ansible managed\n'
dest: "{{ local_temp_dir }}/file.txt"
diff: yes
register: diff_output
- assert:
that:
- 'diff_output.diff[0].before == ""'
- '"Ansible managed" in diff_output.diff[0].after'
always:
- name: Cleaning
file:
path: '{{ local_temp_dir }}'
state: absent
delegate_to: localhost
- name: Remove symbolic link
file:
path: '{{ role_path }}/files/subdir/subdir1/{{ item.key }}'
state: absent
delegate_to: localhost
with_dict: "{{ symlinks }}"
    - name: Remove unprivileged remote user
user:
name: '{{ remote_unprivileged_user }}'
state: absent
remove: yes
force: yes
- name: Remove sudoers.d file
file:
path: "{{ sudoers_d_file }}"
state: absent
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,101 |
ansible.builtin.copy deletes /dev/null instead of copying when called as root
|
### Summary
When calling `ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b`, ansible complains that the _target_ file doesn't exist, and then, if the target file exists, removes the source `/dev/null` instead of copying it onto the target.
### Issue Type
Bug Report
### Component Name
ansible.builtin.copy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/elavarde/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /home/elavarde/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
RHEL 8
### Steps to Reproduce
```shell
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
touch /tmp/xxx
ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
```
### Expected Results
I expected the target file to be emptied by the copy module (similar to what `cp /dev/null somefile` does). It also works if the _target_ file pre-exists and I call the command _without_ become.
### Actual Results
```console
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
localhost | FAILED! => {
"changed": false,
"msg": "path /tmp/xxx does not exist",
"path": "/tmp/xxx"
}
$ touch /tmp/xxx
$ ansible -m ansible.builtin.copy -a 'src=/dev/null dest=/tmp/xxx remote_src=true' localhost -b
[WARNING]: Error deleting remote temporary files (rc: 1, stderr: /bin/sh: /dev/null: Permission denied })
Process WorkerProcess-1:
localhost | CHANGED => {
"changed": true,
"checksum": null,
"dest": "/tmp/xxx",
"gid": 1000,
"group": "elavarde",
"md5sum": null,
"mode": "0664",
"owner": "elavarde",
"secontext": "unconfined_u:object_r:user_tmp_t:s0",
"size": 0,
"src": "/dev/null",
"state": "file",
"uid": 1000
}
Traceback (most recent call last):
File "/usr/lib64/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/site-packages/ansible/executor/process/worker.py", line 138, in run
sys.stdout = sys.stderr = open(os.devnull, 'w')
PermissionError: [Errno 13] Permission denied: '/dev/null'
$ ls -la /dev/null
ls: cannot access '/dev/null': No such file or directory
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79101
|
https://github.com/ansible/ansible/pull/79102
|
cb2e434dd2359a9fe1c00e75431f4abeff7381e8
|
f66016df0e22e1234417dc3538bea75299b4e9eb
| 2022-10-11T14:59:50Z |
python
| 2022-10-17T17:07:04Z |
test/integration/targets/copy/tasks/src_remote_file_is_not_file.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,493 |
Python 3 page in appendix is outdated
|
### Summary
This page still has older python versions listed:
https://docs.ansible.com/ansible/latest/reference_appendices/python_3_support.html
Specifically this note is incorrect:
"On the controller we support Python 3.5 or greater and Python 2.7 or greater. Module-side, we support Python 3.5 or greater and Python 2.6 or greater."
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/python_3_support.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78493
|
https://github.com/ansible/ansible/pull/79173
|
d82d232d0715717d0569d3c645d3e47cad14b4db
|
d6c268378f248c5892298627ad74562593c6300b
| 2022-08-10T16:00:48Z |
python
| 2022-10-20T17:55:41Z |
docs/docsite/rst/reference_appendices/python_3_support.rst
|
================
Python 3 Support
================
Ansible 2.5 and above work with Python 3. Previous to 2.5, using Python 3 was
considered a tech preview. This topic discusses how to set up your controller and managed machines
to use Python 3.
.. note:: On the controller we support Python 3.5 or greater and Python 2.7 or greater. Module-side, we support Python 3.5 or greater and Python 2.6 or greater.
On the controller side
----------------------
The easiest way to run :command:`/usr/bin/ansible` under Python 3 is to install it with the Python3
version of pip. This will make the default :command:`/usr/bin/ansible` run with Python3:
.. code-block:: shell
$ pip3 install ansible
$ ansible --version | grep "python version"
python version = 3.6.2 (default, Sep 22 2017, 08:28:09) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]
If you are running Ansible :ref:`from_source` and want to use Python 3 with your source checkout, run your
command through ``python3``. For example:
.. code-block:: shell
$ source ./hacking/env-setup
$ python3 $(which ansible) localhost -m ping
$ python3 $(which ansible-playbook) sample-playbook.yml
.. note:: Individual Linux distribution packages may be packaged for Python2 or Python3. When running from
distro packages you'll only be able to use Ansible with the Python version for which it was
installed. Sometimes distros will provide a means of installing for several Python versions
(through a separate package or through some commands that are run after install). You'll need to check
with your distro to see if that applies in your case.
Using Python 3 on the managed machines with commands and playbooks
------------------------------------------------------------------
* Ansible will automatically detect and use Python 3 on many platforms that ship with it. To explicitly configure a
Python 3 interpreter, set the ``ansible_python_interpreter`` inventory variable at a group or host level to the
location of a Python 3 interpreter, such as :command:`/usr/bin/python3`. The default interpreter path may also be
set in ``ansible.cfg``.
.. seealso:: :ref:`interpreter_discovery` for more information.
.. code-block:: ini
# Example inventory that makes an alias for localhost that uses Python3
localhost-py3 ansible_host=localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
# Example of setting a group of hosts to use Python3
[py3_hosts]
ubuntu16
fedora27
[py3_hosts:vars]
ansible_python_interpreter=/usr/bin/python3
.. seealso:: :ref:`intro_inventory` for more information.
* Run your command or playbook:
.. code-block:: shell
$ ansible localhost-py3 -m ping
$ ansible-playbook sample-playbook.yml
Note that you can also use the ``-e`` command line option to manually
set the python interpreter when you run a command. This can be useful if you want to test whether
a specific module or playbook has any bugs under Python 3. For example:
.. code-block:: shell
$ ansible localhost -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
$ ansible-playbook sample-playbook.yml -e 'ansible_python_interpreter=/usr/bin/python3'
What to do if an incompatibility is found
-----------------------------------------
We have spent several releases squashing bugs and adding new tests so that Ansible's core feature
set runs under both Python 2 and Python 3. However, bugs may still exist in edge cases and many of
the modules shipped with Ansible are maintained by the community and not all of those may be ported
yet.
If you find a bug running under Python 3 you can submit a bug report on `Ansible's GitHub project
<https://github.com/ansible/ansible/issues/>`_. Be sure to mention Python3 in the bug report so
that the right people look at it.
If you would like to fix the code and submit a pull request on github, you can
refer to :ref:`developing_python_3` for information on how we fix
common Python3 compatibility issues in the Ansible codebase.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,988 |
File module changes mode of src when state is link
|
### Summary
The file module is supposed to only affect `dest:`, but in cases where `state: link` and a `mode:` is supplied, the mode of `src:` — if it exists — is changed.
One could argue whether documenting the current behavior is sufficient vs. whether it's better to "fix" the file module, but it isn't clear what "fix" would mean: (1) ignore `mode:` when `state: link`; (2) fail if `state: link` and a `mode:` is given if (a) `src:` does not exist and/or (b) mode of existing `src:` does not match `mode:`. There are plenty of wrong answers available.
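As a rough sketch only (not the module's actual code, though `params` and `module.warn` follow its conventions), alternative (1) might look like this inside the module's parameter handling:
```python
# Hypothetical sketch of alternative (1): ignore mode when state=link.
if params['state'] == 'link' and params.get('mode') is not None:
    module.warn("mode is ignored for state=link; chmod would follow the "
                "link and change the mode of src instead")
    params['mode'] = None
```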
### Issue Type
Documentation Report
### Component Name
file
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/utoddl/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/venv-python38-ansible-core-212/lib/python3.8/site-packages/ansible
ansible collection location = /home/utoddl/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/venv-python38-ansible-core-212/bin/ansible
python version = 3.8.12 (default, Sep 16 2021, 10:46:05) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME_PLUGIN_PATH(env: ANSIBLE_BECOME_PLUGINS) = ['/usr/share/ansible/plugins/become']
CACHE_PLUGIN(/etc/ansible/ansible.cfg) = memory
DEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = ['/usr/share/ansible/plugins/action']
DEFAULT_CACHE_PLUGIN_PATH(env: ANSIBLE_CACHE_PLUGINS) = ['/usr/share/ansible/plugins/cache']
DEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = ['/usr/share/ansible/plugins/callback']
DEFAULT_CONNECTION_PLUGIN_PATH(env: ANSIBLE_CONNECTION_PLUGINS) = ['/usr/share/ansible/plugins/connection']
DEFAULT_FILTER_PLUGIN_PATH(env: ANSIBLE_FILTER_PLUGINS) = ['/usr/share/ansible/plugins/filter']
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 5
DEFAULT_GATHERING(env: ANSIBLE_GATHERING) = smart
DEFAULT_GATHER_SUBSET(env: ANSIBLE_GATHER_SUBSET) = ['!all', 'virtual', 'network']
DEFAULT_HOST_LIST(env: ANSIBLE_INVENTORY) = ['…']
DEFAULT_INVENTORY_PLUGIN_PATH(env: ANSIBLE_INVENTORY_PLUGINS) = ['/usr/share/ansible/plugins/inventory']
DEFAULT_LOG_PATH(env: ANSIBLE_LOG_PATH) = /var/log/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(env: ANSIBLE_LOOKUP_PLUGINS) = ['/usr/share/ansible/plugins/lookup']
DEFAULT_POLL_INTERVAL(/etc/ansible/ansible.cfg) = 15
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = yaml
DEFAULT_STRATEGY_PLUGIN_PATH(env: ANSIBLE_STRATEGY_PLUGINS) = ['/usr/share/ansible/plugins/strategy']
DEFAULT_TERMINAL_PLUGIN_PATH(env: ANSIBLE_TERMINAL_PLUGINS) = ['/usr/share/ansible/plugins/terminal']
DEFAULT_TEST_PLUGIN_PATH(env: ANSIBLE_TEST_PLUGINS) = ['/usr/share/ansible/plugins/test']
DEFAULT_TIMEOUT(env: ANSIBLE_TIMEOUT) = 30
DEFAULT_TRANSPORT(/etc/ansible/ansible.cfg) = smart
DEFAULT_VARS_PLUGIN_PATH(env: ANSIBLE_VARS_PLUGINS) = ['/usr/share/ansible/plugins/vars']
DEFAULT_VAULT_IDENTITY(env: ANSIBLE_VAULT_IDENTITY) = mw
DEFAULT_VAULT_IDENTITY_LIST(env: ANSIBLE_VAULT_IDENTITY_LIST) = […]
DISPLAY_SKIPPED_HOSTS(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
HOST_KEY_CHECKING(env: ANSIBLE_HOST_KEY_CHECKING) = False
INTERPRETER_PYTHON(env: ANSIBLE_PYTHON_INTERPRETER) = auto
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(env: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS) = silently
BECOME:
======
CACHE:
=====
CALLBACK:
========
default:
_______
display_skipped_hosts(env: ANSIBLE_DISPLAY_SKIPPED_HOSTS) = True
CLICONF:
=======
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(env: ANSIBLE_HOST_KEY_CHECKING) = False
ssh:
___
host_key_checking(env: ANSIBLE_HOST_KEY_CHECKING) = False
pipelining(env: ANSIBLE_SSH_PIPELINING) = False
timeout(env: ANSIBLE_TIMEOUT) = 30
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
sh:
__
remote_tmp(/etc/ansible/ansible.cfg) = /tmp/${USER}/.ansible/tmp
VARS:
====
```
### OS / Environment
Red Hat Enterprise Linux release 8.6 (Ootpa)
### Additional Information
Without this change, users may reasonably expect symbolic links themselves to have the specified mode — for example 0777 or -rwxrwxrwx — when in fact the target of the link (i.e. `src:`) will become world-writable / clobberable.
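The underlying reason can be sketched at the Python level (the paths here are illustrative assumptions, not taken from the report): `chmod()` follows symlinks on Linux, and the platform offers no `lchmod()` to target the link itself.
```python
import os
import stat

# A minimal sketch, assuming /tmp is writable and these paths are free.
with open('/tmp/target', 'w'):
    pass
os.symlink('/tmp/target', '/tmp/link')

# On Linux, chmod() on a symlink path follows the link, so the mode of
# the *target* changes; this is how mode on state=link ends up on src.
os.chmod('/tmp/link', 0o777)
print(stat.filemode(os.stat('/tmp/target').st_mode))  # -rwxrwxrwx

# lchmod() would act on the link itself, but Linux does not provide it.
print(hasattr(os, 'lchmod'))  # False on Linux, True on e.g. macOS
```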
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78988
|
https://github.com/ansible/ansible/pull/79182
|
bcdc2286e853240372b241fd2a178c66f3bc494c
|
465480f755b7f4bad72090bbb350c1bd993505ae
| 2022-10-03T14:53:03Z |
python
| 2022-10-24T12:13:52Z |
lib/ansible/modules/file.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: file
version_added: historical
short_description: Manage files and file properties
extends_documentation_fragment: [files, action_common_attributes]
description:
- Set attributes of files, symlinks or directories.
- Alternatively, remove files, symlinks or directories.
- Many other modules support the same options as the C(file) module - including M(ansible.builtin.copy),
M(ansible.builtin.template), and M(ansible.builtin.assemble).
- For Windows targets, use the M(ansible.windows.win_file) module instead.
options:
path:
description:
- Path to the file being managed.
type: path
required: yes
aliases: [ dest, name ]
state:
description:
- If C(absent), directories will be recursively deleted, and files or symlinks will
be unlinked. In the case of a directory, if C(diff) is declared, you will see the files and folders deleted listed
under C(path_contents). Note that C(absent) will not cause C(file) to fail if the C(path) does
not exist as the state did not change.
- If C(directory), all intermediate subdirectories will be created if they
do not exist. Since Ansible 1.7 they will be created with the supplied permissions.
- If C(file), with no other options, returns the current state of C(path).
- If C(file), even with other options (such as C(mode)), the file will be modified if it exists but will NOT be created if it does not exist.
Set to C(touch) or use the M(ansible.builtin.copy) or M(ansible.builtin.template) module if you want to create the file if it does not exist.
- If C(hard), the hard link will be created or changed.
- If C(link), the symbolic link will be created or changed.
- If C(touch) (new in 1.4), an empty file will be created if the file does not
exist, while an existing file or directory will receive updated file access and
modification times (similar to the way C(touch) works from the command line).
- Default is the current state of the file if it exists, C(directory) if C(recurse=yes), or C(file) otherwise.
type: str
choices: [ absent, directory, file, hard, link, touch ]
src:
description:
- Path of the file to link to.
- This applies only to C(state=link) and C(state=hard).
- For C(state=link), this will also accept a non-existing path.
- Relative paths are relative to the file being created (C(path)) which is how
the Unix command C(ln -s SRC DEST) treats relative paths.
type: path
recurse:
description:
- Recursively set the specified file attributes on directory contents.
- This applies only when C(state) is set to C(directory).
type: bool
default: no
version_added: '1.1'
force:
description:
- >
Force the creation of the symlinks in two cases: the source file does
not exist (but will appear later); the destination exists and is a file (so, we need to unlink the
C(path) file and create symlink to the C(src) file in place of it).
type: bool
default: no
follow:
description:
- This flag indicates that filesystem links, if they exist, should be followed.
- Previous to Ansible 2.5, this was C(no) by default.
type: bool
default: yes
version_added: '1.8'
modification_time:
description:
- This parameter indicates the time the file's modification time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is None meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: "2.7"
modification_time_format:
description:
- When used with C(modification_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
access_time:
description:
- This parameter indicates the time the file's access time should be set to.
- Should be C(preserve) when no modification is required, C(YYYYMMDDHHMM.SS) when using default time format, or C(now).
- Default is C(None) meaning that C(preserve) is the default for C(state=[file,directory,link,hard]) and C(now) is default for C(state=touch).
type: str
version_added: '2.7'
access_time_format:
description:
- When used with C(access_time), indicates the time format that must be used.
- Based on default Python format (see time.strftime doc).
type: str
default: "%Y%m%d%H%M.%S"
version_added: '2.7'
seealso:
- module: ansible.builtin.assemble
- module: ansible.builtin.copy
- module: ansible.builtin.stat
- module: ansible.builtin.template
- module: ansible.windows.win_file
attributes:
check_mode:
support: full
diff_mode:
details: permissions and ownership will be shown but file contents on absent/touch will not.
support: partial
platform:
platforms: posix
author:
- Ansible Core Team
- Michael DeHaan
'''
EXAMPLES = r'''
- name: Change file ownership, group and permissions
ansible.builtin.file:
path: /etc/foo.conf
owner: foo
group: foo
mode: '0644'
- name: Give insecure permissions to an existing file
ansible.builtin.file:
path: /work
owner: root
group: root
mode: '1777'
- name: Create a symbolic link
ansible.builtin.file:
src: /file/to/link/to
dest: /path/to/symlink
owner: foo
group: foo
state: link
- name: Create two hard links
ansible.builtin.file:
src: '/tmp/{{ item.src }}'
dest: '{{ item.dest }}'
state: hard
loop:
- { src: x, dest: y }
- { src: z, dest: k }
- name: Touch a file, using symbolic modes to set the permissions (equivalent to 0644)
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u=rw,g=r,o=r
- name: Touch the same file, but add/remove some permissions
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
- name: Touch again the same file, but do not change times; this makes the task idempotent
ansible.builtin.file:
path: /etc/foo.conf
state: touch
mode: u+rw,g-wx,o-rwx
modification_time: preserve
access_time: preserve
- name: Create a directory if it does not exist
ansible.builtin.file:
path: /etc/some_directory
state: directory
mode: '0755'
- name: Update modification and access time of given file
ansible.builtin.file:
path: /etc/some_file
state: file
modification_time: now
access_time: now
- name: Set access time based on seconds from epoch value
ansible.builtin.file:
path: /etc/another_file
state: file
access_time: '{{ "%Y%m%d%H%M.%S" | strftime(stat_var.stat.atime) }}'
- name: Recursively change ownership of a directory
ansible.builtin.file:
path: /etc/foo
state: directory
recurse: yes
owner: foo
group: foo
- name: Remove file (delete file)
ansible.builtin.file:
path: /etc/foo.txt
state: absent
- name: Recursively remove directory
ansible.builtin.file:
path: /etc/foo
state: absent
'''
RETURN = r'''
dest:
description: Destination file/path, equal to the value passed to I(path).
returned: state=touch, state=hard, state=link
type: str
sample: /path/to/file.txt
path:
description: Destination file/path, equal to the value passed to I(path).
returned: state=absent, state=directory, state=file
type: str
sample: /path/to/file.txt
'''
import errno
import os
import shutil
import sys
import time
from pwd import getpwnam, getpwuid
from grp import getgrnam, getgrgid
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
# There will only be a single AnsibleModule object per module
module = None
class AnsibleModuleError(Exception):
def __init__(self, results):
self.results = results
def __repr__(self):
return 'AnsibleModuleError(results={0})'.format(self.results)
class ParameterError(AnsibleModuleError):
pass
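# Sentinel is a unique marker distinct from None: for timestamps, None means
# "preserve" while Sentinel means "set to now". __new__ returns the class
# itself, so the marker is a singleton and is never actually instantiated.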
class Sentinel(object):
def __new__(cls, *args, **kwargs):
return cls
def _ansible_excepthook(exc_type, exc_value, tb):
# Using an exception allows us to catch it if the calling code knows it can recover
if issubclass(exc_type, AnsibleModuleError):
module.fail_json(**exc_value.results)
else:
sys.__excepthook__(exc_type, exc_value, tb)
def additional_parameter_handling(params):
"""Additional parameter validation and reformatting"""
# When path is a directory, rewrite the pathname to be the file inside of the directory
# TODO: Why do we exclude link? Why don't we exclude directory? Should we exclude touch?
# I think this is where we want to be in the future:
# when isdir(path):
# if state == absent: Remove the directory
# if state == touch: Touch the directory
# if state == directory: Assert the directory is the same as the one specified
# if state == file: place inside of the directory (use _original_basename)
# if state == link: place inside of the directory (use _original_basename. Fallback to src?)
# if state == hard: place inside of the directory (use _original_basename. Fallback to src?)
if (params['state'] not in ("link", "absent") and os.path.isdir(to_bytes(params['path'], errors='surrogate_or_strict'))):
basename = None
if params['_original_basename']:
basename = params['_original_basename']
elif params['src']:
basename = os.path.basename(params['src'])
if basename:
params['path'] = os.path.join(params['path'], basename)
# state should default to file, but since that creates many conflicts,
# default state to 'current' when it exists.
prev_state = get_state(to_bytes(params['path'], errors='surrogate_or_strict'))
if params['state'] is None:
if prev_state != 'absent':
params['state'] = prev_state
elif params['recurse']:
params['state'] = 'directory'
else:
params['state'] = 'file'
# make sure the target path is a directory when we're doing a recursive operation
if params['recurse'] and params['state'] != 'directory':
raise ParameterError(results={"msg": "recurse option requires state to be 'directory'",
"path": params["path"]})
# Fail if 'src' but no 'state' is specified
if params['src'] and params['state'] not in ('link', 'hard'):
raise ParameterError(results={'msg': "src option requires state to be 'link' or 'hard'",
'path': params['path']})
def get_state(path):
''' Find out current state '''
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
if os.path.lexists(b_path):
if os.path.islink(b_path):
return 'link'
elif os.path.isdir(b_path):
return 'directory'
elif os.stat(b_path).st_nlink > 1:
return 'hard'
# could be many other things, but defaulting to file
return 'file'
return 'absent'
except OSError as e:
if e.errno == errno.ENOENT: # It may already have been removed
return 'absent'
else:
raise
# This should be moved into the common file utilities
def recursive_set_attributes(b_path, follow, file_args, mtime, atime):
changed = False
try:
for b_root, b_dirs, b_files in os.walk(b_path):
for b_fsobj in b_dirs + b_files:
b_fsname = os.path.join(b_root, b_fsobj)
if not os.path.islink(b_fsname):
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
else:
# Change perms on the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
if follow:
b_fsname = os.path.join(b_root, os.readlink(b_fsname))
# The link target could be nonexistent
if os.path.exists(b_fsname):
if os.path.isdir(b_fsname):
# Link is a directory so change perms on the directory's contents
changed |= recursive_set_attributes(b_fsname, follow, file_args, mtime, atime)
# Change perms on the file pointed to by the link
tmp_file_args = file_args.copy()
tmp_file_args['path'] = to_native(b_fsname, errors='surrogate_or_strict')
changed |= module.set_fs_attributes_if_different(tmp_file_args, changed, expand=False)
changed |= update_timestamp_for_file(tmp_file_args['path'], mtime, atime)
except RuntimeError as e:
# on Python3 "RecursionError" is raised which is derived from "RuntimeError"
# TODO once this function is moved into the common file utilities, this should probably raise more general exception
raise AnsibleModuleError(
results={'msg': "Could not recursively set attributes on %s. Original error was: '%s'" % (to_native(b_path), to_native(e))}
)
return changed
def initial_diff(path, state, prev_state):
diff = {'before': {'path': path},
'after': {'path': path},
}
if prev_state != state:
diff['before']['state'] = prev_state
diff['after']['state'] = state
if state == 'absent' and prev_state == 'directory':
walklist = {
'directories': [],
'files': [],
}
b_path = to_bytes(path, errors='surrogate_or_strict')
for base_path, sub_folders, files in os.walk(b_path):
for folder in sub_folders:
folderpath = os.path.join(base_path, folder)
walklist['directories'].append(folderpath)
for filename in files:
filepath = os.path.join(base_path, filename)
walklist['files'].append(filepath)
diff['before']['path_content'] = walklist
return diff
#
# States
#
def get_timestamp_for_time(formatted_time, time_format):
if formatted_time == 'preserve':
return None
elif formatted_time == 'now':
return Sentinel
else:
try:
struct = time.strptime(formatted_time, time_format)
struct_time = time.mktime(struct)
except (ValueError, OverflowError) as e:
raise AnsibleModuleError(results={'msg': 'Error while obtaining timestamp for time %s using format %s: %s'
% (formatted_time, time_format, to_native(e, nonstring='simplerepr'))})
return struct_time
def update_timestamp_for_file(path, mtime, atime, diff=None):
b_path = to_bytes(path, errors='surrogate_or_strict')
try:
# When mtime and atime are set to 'now', rely on utime(path, None) which does not require ownership of the file
# https://github.com/ansible/ansible/issues/50943
if mtime is Sentinel and atime is Sentinel:
# It's not exact but we can't rely on os.stat(path).st_mtime after setting os.utime(path, None) as it may
# not be updated. Just use the current time for the diff values
mtime = atime = time.time()
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
set_time = None
else:
# If both parameters are None 'preserve', nothing to do
if mtime is None and atime is None:
return False
previous_mtime = os.stat(b_path).st_mtime
previous_atime = os.stat(b_path).st_atime
if mtime is None:
mtime = previous_mtime
elif mtime is Sentinel:
mtime = time.time()
if atime is None:
atime = previous_atime
elif atime is Sentinel:
atime = time.time()
# If both timestamps are already ok, nothing to do
if mtime == previous_mtime and atime == previous_atime:
return False
set_time = (atime, mtime)
if not module.check_mode:
os.utime(b_path, set_time)
if diff is not None:
if 'before' not in diff:
diff['before'] = {}
if 'after' not in diff:
diff['after'] = {}
if mtime != previous_mtime:
diff['before']['mtime'] = previous_mtime
diff['after']['mtime'] = mtime
if atime != previous_atime:
diff['before']['atime'] = previous_atime
diff['after']['atime'] = atime
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while updating modification or access time: %s'
% to_native(e, nonstring='simplerepr'), 'path': path})
return True
def keep_backward_compatibility_on_timestamps(parameter, state):
if state in ['file', 'hard', 'directory', 'link'] and parameter is None:
return 'preserve'
elif state == 'touch' and parameter is None:
return 'now'
else:
return parameter
def execute_diff_peek(path):
"""Take a guess as to whether a file is a binary file"""
b_path = to_bytes(path, errors='surrogate_or_strict')
appears_binary = False
try:
with open(b_path, 'rb') as f:
head = f.read(8192)
except Exception:
# If we can't read the file, we're okay assuming it's text
pass
else:
if b"\x00" in head:
appears_binary = True
return appears_binary
def ensure_absent(path):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
result = {}
if prev_state != 'absent':
diff = initial_diff(path, 'absent', prev_state)
if not module.check_mode:
if prev_state == 'directory':
try:
shutil.rmtree(b_path, ignore_errors=False)
except Exception as e:
raise AnsibleModuleError(results={'msg': "rmtree failed: %s" % to_native(e)})
else:
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise AnsibleModuleError(results={'msg': "unlinking failed: %s " % to_native(e),
'path': path})
result.update({'path': path, 'changed': True, 'diff': diff, 'state': 'absent'})
else:
result.update({'path': path, 'changed': False, 'state': 'absent'})
return result
def execute_touch(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
changed = False
result = {'dest': path}
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if not module.check_mode:
if prev_state == 'absent':
# Create an empty file if the filename did not already exist
try:
open(b_path, 'wb').close()
changed = True
except (OSError, IOError) as e:
raise AnsibleModuleError(results={'msg': 'Error, could not touch target: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
# Update the attributes on the file
diff = initial_diff(path, 'touch', prev_state)
file_args = module.load_file_common_arguments(module.params)
try:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except SystemExit as e:
if e.code: # this is the exit code passed to sys.exit, not a constant -- pylint: disable=using-constant-test
# We take this to mean that fail_json() was called from
# somewhere in basic.py
if prev_state == 'absent':
# If we just created the file we can safely remove it
os.remove(b_path)
raise
result['changed'] = changed
result['diff'] = diff
return result
def ensure_file_attributes(path, follow, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
if prev_state != 'file':
if follow and prev_state == 'link':
# follow symlink and operate on original
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
prev_state = get_state(b_path)
file_args['path'] = path
if prev_state not in ('file', 'hard'):
# file is not absent and any other state is a conflict
raise AnsibleModuleError(results={'msg': 'file (%s) is %s, cannot continue' % (path, prev_state),
'path': path, 'state': prev_state})
diff = initial_diff(path, 'file', prev_state)
changed = module.set_fs_attributes_if_different(file_args, False, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_directory(path, follow, recurse, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# For followed symlinks, we need to operate on the target of the link
if follow and prev_state == 'link':
b_path = os.path.realpath(b_path)
path = to_native(b_path, errors='strict')
file_args['path'] = path
prev_state = get_state(b_path)
changed = False
diff = initial_diff(path, 'directory', prev_state)
if prev_state == 'absent':
# Create directory and assign permissions to it
if module.check_mode:
return {'path': path, 'changed': True, 'diff': diff}
curpath = ''
try:
# Split the path so we can apply filesystem attributes recursively
# from the root (/) directory for absolute paths or the base path
# of a relative path. We can then walk the appropriate directory
# path to apply attributes.
# Something like mkdir -p with mode applied to all of the newly created directories
for dirname in path.strip('/').split('/'):
curpath = '/'.join([curpath, dirname])
# Remove leading slash if we're creating a relative path
if not os.path.isabs(path):
curpath = curpath.lstrip('/')
b_curpath = to_bytes(curpath, errors='surrogate_or_strict')
if not os.path.exists(b_curpath):
try:
os.mkdir(b_curpath)
changed = True
except OSError as ex:
# Possibly something else created the dir since the os.path.exists
# check above. As long as it's a dir, we don't need to error out.
if not (ex.errno == errno.EEXIST and os.path.isdir(b_curpath)):
raise
tmp_file_args = file_args.copy()
tmp_file_args['path'] = curpath
changed = module.set_fs_attributes_if_different(tmp_file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
except Exception as e:
raise AnsibleModuleError(results={'msg': 'There was an issue creating %s as requested:'
' %s' % (curpath, to_native(e)),
'path': path})
return {'path': path, 'changed': changed, 'diff': diff}
elif prev_state != 'directory':
# We already know prev_state is not 'absent', therefore it exists in some form.
raise AnsibleModuleError(results={'msg': '%s already exists as a %s' % (path, prev_state),
'path': path})
#
# previous state == directory
#
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
if recurse:
changed |= recursive_set_attributes(b_path, follow, file_args, mtime, atime)
return {'path': path, 'changed': changed, 'diff': diff}
def ensure_symlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# source is both the source of a symlink or an informational passing of the src for a template module
# or copy module, even if this module never uses it, it is needed to key off some things
if src is None:
if follow and os.path.exists(b_path):
# use the current target of the link as the source
src = to_native(os.readlink(b_path), errors='strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
if not os.path.islink(b_path) and os.path.isdir(b_path):
relpath = path
else:
b_relpath = os.path.dirname(b_path)
relpath = to_native(b_relpath, errors='strict')
# If src is None that means we are expecting to update an existing link.
if src is None:
absrc = None
else:
absrc = os.path.join(relpath, src)
b_absrc = to_bytes(absrc, errors='surrogate_or_strict')
if not force and src is not None and not os.path.exists(b_absrc):
raise AnsibleModuleError(results={'msg': 'src file does not exist, use "force=yes" if you'
' really want to create the link: %s' % absrc,
'path': path, 'src': src})
if prev_state == 'directory':
if not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
elif os.listdir(b_path):
# refuse to replace a directory that has files in it
raise AnsibleModuleError(results={'msg': 'the directory %s is not empty, refusing to'
' convert it' % path,
'path': path})
elif prev_state in ('file', 'hard') and not force:
raise AnsibleModuleError(results={'msg': 'refusing to convert from %s to symlink for %s'
% (prev_state, path),
'path': path})
diff = initial_diff(path, 'link', prev_state)
changed = False
if prev_state in ('hard', 'file', 'directory', 'absent'):
if src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new symlinks'})
changed = True
elif prev_state == 'link':
if src is not None:
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
os.rmdir(b_path)
os.symlink(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.symlink(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
# Now that we might have created the symlink, get the arguments.
# We need to do it now so we can properly follow the symlink if needed
# because load_file_common_arguments sets 'path' according
# the value of follow and the symlink existence.
file_args = module.load_file_common_arguments(module.params)
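    # Note: with follow=True (the default) the common arguments are loaded
    # against the resolved link target, so a supplied mode/owner/group is
    # applied to the target of the link rather than to the link itself.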
# Whenever we create a link to a nonexistent target we know that the nonexistent target
# cannot have any permissions set on it. Skip setting those and emit a warning (the user
# can set follow=False to remove the warning)
if follow and os.path.islink(b_path) and not os.path.exists(file_args['path']):
module.warn('Cannot set fs attributes on a non-existent symlink target. follow should be'
' set to False to avoid this.')
else:
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def ensure_hardlink(path, src, follow, force, timestamps):
b_path = to_bytes(path, errors='surrogate_or_strict')
b_src = to_bytes(src, errors='surrogate_or_strict')
prev_state = get_state(b_path)
file_args = module.load_file_common_arguments(module.params)
mtime = get_timestamp_for_time(timestamps['modification_time'], timestamps['modification_time_format'])
atime = get_timestamp_for_time(timestamps['access_time'], timestamps['access_time_format'])
# src is the source of a hardlink. We require it if we are creating a new hardlink.
# We require path in the argument_spec so we know it is present at this point.
if prev_state != 'hard' and src is None:
raise AnsibleModuleError(results={'msg': 'src is required for creating new hardlinks'})
# Even if the link already exists, if src was specified it needs to exist.
# The inode number will be compared to ensure the link has the correct target.
if src is not None and not os.path.exists(b_src):
raise AnsibleModuleError(results={'msg': 'src does not exist', 'dest': path, 'src': src})
diff = initial_diff(path, 'hard', prev_state)
changed = False
if prev_state == 'absent':
changed = True
elif prev_state == 'link':
b_old_src = os.readlink(b_path)
if b_old_src != b_src:
diff['before']['src'] = to_native(b_old_src, errors='strict')
diff['after']['src'] = src
changed = True
elif prev_state == 'hard':
if src is not None and not os.stat(b_path).st_ino == os.stat(b_src).st_ino:
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, different hard link exists at destination',
'dest': path, 'src': src})
elif prev_state == 'file':
changed = True
if not force:
raise AnsibleModuleError(results={'msg': 'Cannot link, %s exists at destination' % prev_state,
'dest': path, 'src': src})
elif prev_state == 'directory':
changed = True
if os.path.exists(b_path):
if os.stat(b_path).st_ino == os.stat(b_src).st_ino:
return {'path': path, 'changed': False}
elif not force:
raise AnsibleModuleError(results={'msg': 'Cannot link: different hard link exists at destination',
'dest': path, 'src': src})
else:
raise AnsibleModuleError(results={'msg': 'unexpected position reached', 'dest': path, 'src': src})
if changed and not module.check_mode:
if prev_state != 'absent':
# try to replace atomically
b_tmppath = to_bytes(os.path.sep).join(
[os.path.dirname(b_path), to_bytes(".%s.%s.tmp" % (os.getpid(), time.time()))]
)
try:
if prev_state == 'directory':
if os.path.exists(b_path):
try:
os.unlink(b_path)
except OSError as e:
if e.errno != errno.ENOENT: # It may already have been removed
raise
os.link(b_src, b_tmppath)
os.rename(b_tmppath, b_path)
except OSError as e:
if os.path.exists(b_tmppath):
os.unlink(b_tmppath)
raise AnsibleModuleError(results={'msg': 'Error while replacing: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
else:
try:
os.link(b_src, b_path)
except OSError as e:
raise AnsibleModuleError(results={'msg': 'Error while linking: %s'
% to_native(e, nonstring='simplerepr'),
'path': path})
if module.check_mode and not os.path.exists(b_path):
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
changed = module.set_fs_attributes_if_different(file_args, changed, diff, expand=False)
changed |= update_timestamp_for_file(file_args['path'], mtime, atime, diff)
return {'dest': path, 'src': src, 'changed': changed, 'diff': diff}
def check_owner_exists(module, owner):
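    # In check mode we only warn when the requested owner cannot be resolved,
    # since an earlier task in a real run might create that user first.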
try:
uid = int(owner)
try:
getpwuid(uid).pw_name
except KeyError:
module.warn('failed to look up user with uid %s. Create user up to this point in real play' % uid)
except ValueError:
try:
getpwnam(owner).pw_uid
except KeyError:
module.warn('failed to look up user %s. Create user up to this point in real play' % owner)
def check_group_exists(module, group):
try:
gid = int(group)
try:
getgrgid(gid).gr_name
except KeyError:
module.warn('failed to look up group with gid %s. Create group up to this point in real play' % gid)
except ValueError:
try:
getgrnam(group).gr_gid
except KeyError:
module.warn('failed to look up group %s. Create group up to this point in real play' % group)
def main():
global module
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', choices=['absent', 'directory', 'file', 'hard', 'link', 'touch']),
path=dict(type='path', required=True, aliases=['dest', 'name']),
_original_basename=dict(type='str'), # Internal use only, for recursive ops
recurse=dict(type='bool', default=False),
force=dict(type='bool', default=False), # Note: Should not be in file_common_args in future
follow=dict(type='bool', default=True), # Note: Different default than file_common_args
_diff_peek=dict(type='bool'), # Internal use only, for internal checks in the action plugins
src=dict(type='path'), # Note: Should not be in file_common_args in future
modification_time=dict(type='str'),
modification_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
access_time=dict(type='str'),
access_time_format=dict(type='str', default='%Y%m%d%H%M.%S'),
),
add_file_common_args=True,
supports_check_mode=True,
)
# When we rewrite basic.py, we will do something similar to this on instantiating an AnsibleModule
sys.excepthook = _ansible_excepthook
additional_parameter_handling(module.params)
params = module.params
state = params['state']
recurse = params['recurse']
force = params['force']
follow = params['follow']
path = params['path']
src = params['src']
if module.check_mode and state != 'absent':
file_args = module.load_file_common_arguments(module.params)
if file_args['owner']:
check_owner_exists(module, file_args['owner'])
if file_args['group']:
check_group_exists(module, file_args['group'])
timestamps = {}
timestamps['modification_time'] = keep_backward_compatibility_on_timestamps(params['modification_time'], state)
timestamps['modification_time_format'] = params['modification_time_format']
timestamps['access_time'] = keep_backward_compatibility_on_timestamps(params['access_time'], state)
timestamps['access_time_format'] = params['access_time_format']
# short-circuit for diff_peek
if params['_diff_peek'] is not None:
appears_binary = execute_diff_peek(to_bytes(path, errors='surrogate_or_strict'))
module.exit_json(path=path, changed=False, appears_binary=appears_binary)
if state == 'file':
result = ensure_file_attributes(path, follow, timestamps)
elif state == 'directory':
result = ensure_directory(path, follow, recurse, timestamps)
elif state == 'link':
result = ensure_symlink(path, src, follow, force, timestamps)
elif state == 'hard':
result = ensure_hardlink(path, src, follow, force, timestamps)
elif state == 'touch':
result = execute_touch(path, follow, timestamps)
elif state == 'absent':
result = ensure_absent(path)
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,095 |
Documentation required for minimal privileges of service account for managed node
|
### Summary
I wanted to view ‘managed node requirements’. My original need is below:
“A managed node is AFAIK usually accessed via some user account over ssh.
What are the minimal privileges, or any other requirements, of that account within that managed node?
Will have multiple platform implications.”
_Originally posted by @pjgoodall in https://github.com/ansible/ansible/issues/79080#issuecomment-1273905891_
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/installation_guide/intro_installation.rst
### Ansible Version
```console
$ ansible --version
2.13
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
N/A
### Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79095
|
https://github.com/ansible/ansible/pull/79189
|
465480f755b7f4bad72090bbb350c1bd993505ae
|
757efa4a457a20731dcdaa38f86a02e14b446ede
| 2022-10-10T23:37:50Z |
python
| 2022-10-24T20:53:43Z |
docs/docsite/rst/installation_guide/intro_installation.rst
|
.. _installation_guide:
.. _intro_installation_guide:
******************
Installing Ansible
******************
Ansible is an agentless automation tool that you install on a single host (referred to as the control node). From the control node, Ansible can manage an entire fleet of machines and other devices (referred to as managed nodes) remotely with SSH, PowerShell remoting, and numerous other transports, all from a simple command-line interface with no databases or daemons required.
.. contents::
:local:
.. _control_node_requirements:
Control node requirements
=========================
For your *control* node (the machine that runs Ansible), you can use nearly any UNIX-like machine with Python 3.9 or newer installed. This includes Red Hat, Debian, Ubuntu, macOS, BSDs, and Windows under a `Windows Subsystem for Linux (WSL) distribution <https://docs.microsoft.com/en-us/windows/wsl/about>`_. Windows without WSL is not natively supported as a control node; see `Matt Davis' blog post <http://blog.rolpdog.com/2020/03/why-no-ansible-controller-for-windows.html>`_ for more information.
.. _managed_node_requirements:
Managed node requirements
=========================
The *managed* node (the machine that Ansible is managing) does not require Ansible to be installed, but requires Python 2.7 or Python 3.5 - 3.11 to run Ansible library code.
.. note::
Network modules are an exception and do not require Python on the managed device. See :ref:`network_modules`.
.. _node_requirements_summary:
Node requirement summary
========================
The table below lists the current and historical versions of Python
required on control and managed nodes.
.. list-table::
:header-rows: 1
* - ansible-core Version
- Control node Python
- Managed node Python
* - 2.11
- Python 2.7, Python 3.5 - 3.9 `[†]`_
- Python 2.6 - 2.7, Python 3.5 - 3.9
* - 2.12
- Python 3.8 - 3.10
- Python 2.6 - 2.7, Python 3.5 - 3.10
* - 2.13
- Python 3.8 - 3.10
- Python 2.7, Python 3.5 - 3.10
* - 2.14
- Python 3.9 - 3.11
- Python 2.7, Python 3.5 - 3.11
_`[†]`: Has a soft requirement of Python 3.8 as not packaged for older versions
.. _getting_ansible:
.. _what_version:
Selecting an Ansible package and version to install
====================================================
Ansible's community packages are distributed in two ways: a minimalist language and runtime package called ``ansible-core``, and a much larger "batteries included" package called ``ansible``, which adds a community-curated selection of :ref:`Ansible Collections <collections>` for automating a wide variety of devices. Choose the package that fits your needs. The following instructions use ``ansible``, but you can substitute ``ansible-core`` if you prefer to start with a more minimal package and separately install only the Ansible Collections you require. The ``ansible`` or ``ansible-core`` packages may be available in your operating system's package manager, and you are free to install these packages with your preferred method. These installation instructions only cover the officially supported means of installing the Python package with ``pip``.
Installing and upgrading Ansible
================================
Locating Python
---------------
Locate and remember the path to the Python interpreter you wish to use to run Ansible. The following instructions refer to this Python as ``python3``. For example, if you've determined that you want the Python at ``/usr/bin/python3.9`` to be the one that you'll install Ansible under, specify that instead of ``python3``.
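If you are unsure which interpreter a given ``python3`` command resolves to, a generic Python one-liner (not specific to this guide) prints its absolute path:
.. code-block:: python

   # Print the absolute path of the interpreter currently running.
   import sys
   print(sys.executable)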
Ensuring ``pip`` is available
-----------------------------
To verify whether ``pip`` is already installed for your preferred Python:
.. code-block:: console
$ python3 -m pip -V
If all is well, you should see something like the following:
.. code-block:: console
$ python3 -m pip -V
pip 21.0.1 from /usr/lib/python3.9/site-packages/pip (python 3.9)
If so, ``pip`` is available, and you can move on to the :ref:`next step <pip_install>`.
If you see an error like ``No module named pip``, you'll need to install ``pip`` under your chosen Python interpreter before proceeding. This may mean installing an additional OS package (for example, ``python3-pip``), or installing the latest ``pip`` directly from the Python Packaging Authority by running the following:
.. code-block:: console
$ curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
$ python3 get-pip.py --user
You may need to perform some additional configuration before you are able to run Ansible. See the Python documentation on `installing to the user site`_ for more information.
.. _installing to the user site: https://packaging.python.org/tutorials/installing-packages/#installing-to-the-user-site
.. _pip_install:
Installing Ansible
------------------
Use ``pip`` in your selected Python environment to install the Ansible package of your choice for the current user:
.. code-block:: console
$ python3 -m pip install --user ansible
Alternately, you can install a specific version of ``ansible-core`` in this Python environment:
.. code-block:: console
$ python3 -m pip install --user ansible-core==2.12.3
.. _pip_upgrade:
Upgrading Ansible
-----------------
To upgrade an existing Ansible installation in this Python environment to the latest released version, simply add ``--upgrade`` to the command above:
.. code-block:: console
$ python3 -m pip install --upgrade --user ansible
Confirming your installation
----------------------------
You can test that Ansible is installed correctly by checking the version:
.. code-block:: console
$ ansible --version
The version displayed by this command is for the associated ``ansible-core`` package that has been installed.
To check the version of the ``ansible`` package that has been installed:
.. code-block:: console
$ python3 -m pip show ansible
.. _development_install:
Installing for development
==========================
If you are testing new features, fixing bugs, or otherwise working with the development team on changes to the core code, you can install and run the source from GitHub.
.. note::
You should only install and run the ``devel`` branch if you are modifying ``ansible-core`` or trying out features under development. This is a rapidly changing source of code and can become unstable at any point.
For more information on getting involved in the Ansible project, see the :ref:`ansible_community_guide`. For more information on creating Ansible modules and Collections, see the :ref:`developer_guide`.
.. _from_pip_devel:
Installing ``devel`` from GitHub with ``pip``
---------------------------------------------
You can install the ``devel`` branch of ``ansible-core`` directly from GitHub with ``pip``:
.. code-block:: console
$ python3 -m pip install --user https://github.com/ansible/ansible/archive/devel.tar.gz
You can replace ``devel`` in the URL mentioned above, with any other branch or tag on GitHub to install older versions of Ansible, tagged alpha or beta versions, and release candidates.
.. _from_source:
Running the ``devel`` branch from a clone
-----------------------------------------
``ansible-core`` is easy to run from source. You do not need ``root`` permissions to use it and there is no software to actually install. No daemons or database setup are required.
#. Clone the ``ansible-core`` repository
.. code-block:: console
$ git clone https://github.com/ansible/ansible.git
$ cd ./ansible
#. Setup the Ansible environment
* Using Bash
.. code-block:: console
$ source ./hacking/env-setup
* Using Fish
.. code-block:: console
$ source ./hacking/env-setup.fish
* To suppress spurious warnings/errors, use ``-q``
.. code-block:: console
$ source ./hacking/env-setup -q
#. Install Python dependencies
.. code-block:: console
$ python3 -m pip install --user -r ./requirements.txt
#. Update the ``devel`` branch of ``ansible-core`` on your local machine
Use pull-with-rebase so any local changes are replayed.
.. code-block:: console
$ git pull --rebase
.. _shell_completion:
Adding Ansible command shell completion
=======================================
You can add shell completion of the Ansible command line utilities by installing an optional dependency called ``argcomplete``. ``argcomplete`` supports bash, and has limited support for zsh and tcsh.
For more information about installation and configuration, see the `argcomplete documentation <https://kislyuk.github.io/argcomplete/>`_.
Installing ``argcomplete``
--------------------------
.. code-block:: console
$ python3 -m pip install --user argcomplete
Configuring ``argcomplete``
---------------------------
There are 2 ways to configure ``argcomplete`` to allow shell completion of the Ansible command line utilities: globally or per command.
Global configuration
^^^^^^^^^^^^^^^^^^^^
Global completion requires bash 4.2.
.. code-block:: console
$ activate-global-python-argcomplete --user
This will write a bash completion file to a user location. Use ``--dest`` to change the location or ``sudo`` to set up the completion globally.
Per command configuration
^^^^^^^^^^^^^^^^^^^^^^^^^
If you do not have bash 4.2, you must register each script independently.
.. code-block:: console
$ eval $(register-python-argcomplete ansible)
$ eval $(register-python-argcomplete ansible-config)
$ eval $(register-python-argcomplete ansible-console)
$ eval $(register-python-argcomplete ansible-doc)
$ eval $(register-python-argcomplete ansible-galaxy)
$ eval $(register-python-argcomplete ansible-inventory)
$ eval $(register-python-argcomplete ansible-playbook)
$ eval $(register-python-argcomplete ansible-pull)
$ eval $(register-python-argcomplete ansible-vault)
You should place the above commands into your shell's profile file, such as ``~/.profile`` or ``~/.bash_profile``.
Using ``argcomplete`` with zsh or tcsh
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
See the `argcomplete documentation <https://kislyuk.github.io/argcomplete/>`_.
.. seealso::
:ref:`intro_adhoc`
Examples of basic commands
:ref:`working_with_playbooks`
Learning ansible's configuration management language
:ref:`installation_faqs`
Ansible Installation related to FAQs
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,882 |
pylint sanity test crashes for vmware.vmware_rest
|
### Summary
The pylint sanity test crashes for vmware.vmware_rest. The error can be seen in https://3d7932ae3c3d1cd1ac23-1794bc1134f138a3d06a8b52731b06da.ssl.cf1.rackcdn.com/357/83be7ae7fe3768158f0cdfee3198013dcbfd4d69/check/ansible-test-sanity-docker-devel/c998f25/job-output.txt. This has been the case since 4d25233ece21c545254149ffe78291c734076609 was merged; before this commit the test passed, but with that commit it no longer does.
Reported by @mariolenz
### Issue Type
Bug Report
### Component Name
ansible-test pylint sanity test
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Install ansible-core from devel branch, and run `ansible-test sanity --docker -v --test pylint` in a checkout of https://github.com/ansible-collections/vmware.vmware_rest
### Expected Results
Sanity tests pass (or at least do not crash).
### Actual Results
```console
2022-09-26 16:13:26.168048 | controller | Checking 12 file(s) in context "collection" with config: /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
2022-09-26 16:13:26.168224 | controller | Run command: /root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2
2022-09-26 16:13:26.540327 | controller | FATAL: Command "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2" returned exit status 1.
2022-09-26 16:13:26.540374 | controller | >>> Standard Error
2022-09-26 16:13:26.540384 | controller | Traceback (most recent call last):
2022-09-26 16:13:26.540392 | controller | File "<frozen runpy>", line 198, in _run_module_as_main
2022-09-26 16:13:26.540399 | controller | File "<frozen runpy>", line 88, in _run_code
2022-09-26 16:13:26.540407 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__main__.py", line 10, in <module>
2022-09-26 16:13:26.540414 | controller | pylint.run_pylint()
2022-09-26 16:13:26.540421 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__init__.py", line 35, in run_pylint
2022-09-26 16:13:26.540428 | controller | PylintRun(argv or sys.argv[1:])
2022-09-26 16:13:26.540474 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/run.py", line 207, in __init__
2022-09-26 16:13:26.540481 | controller | linter.check(args)
2022-09-26 16:13:26.540488 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/pylinter.py", line 666, in check
2022-09-26 16:13:26.540495 | controller | check_parallel(
2022-09-26 16:13:26.540502 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/parallel.py", line 141, in check_parallel
2022-09-26 16:13:26.540510 | controller | jobs, initializer=initializer, initargs=[dill.dumps(linter)]
2022-09-26 16:13:26.540518 | controller | ^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540525 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 364, in dumps
2022-09-26 16:13:26.540532 | controller | dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)
2022-09-26 16:13:26.540539 | controller | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540546 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 336, in dump
2022-09-26 16:13:26.540553 | controller | Pickler(file, protocol, **_kwds).dump(obj)
2022-09-26 16:13:26.540560 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 620, in dump
2022-09-26 16:13:26.540576 | controller | StockPickler.dump(self, obj)
2022-09-26 16:13:26.540584 | controller | File "/usr/lib/python3.11/pickle.py", line 487, in dump
2022-09-26 16:13:26.540591 | controller | self.save(obj)
2022-09-26 16:13:26.540598 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540605 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540612 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540618 | controller | save(state)
2022-09-26 16:13:26.540625 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540632 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540638 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540645 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540652 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540659 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540665 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540672 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540679 | controller | save(v)
2022-09-26 16:13:26.540686 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540693 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540700 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540707 | controller | save(state)
2022-09-26 16:13:26.540713 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540720 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540727 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540734 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540740 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540753 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540760 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540767 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540774 | controller | save(v)
2022-09-26 16:13:26.540780 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540787 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540794 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540801 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540807 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540814 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540821 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540828 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540834 | controller | save(v)
2022-09-26 16:13:26.540841 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540848 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540854 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540861 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540868 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540875 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540882 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540889 | controller | File "/usr/lib/python3.11/pickle.py", line 1003, in _batch_setitems
2022-09-26 16:13:26.540895 | controller | save(v)
2022-09-26 16:13:26.540902 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540909 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540915 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540922 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1963, in save_function
2022-09-26 16:13:26.540932 | controller | _save_with_postproc(pickler, (_create_function, (
2022-09-26 16:13:26.540939 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1140, in _save_with_postproc
2022-09-26 16:13:26.540946 | controller | pickler.save_reduce(*reduction, obj=obj)
2022-09-26 16:13:26.540958 | controller | File "/usr/lib/python3.11/pickle.py", line 692, in save_reduce
2022-09-26 16:13:26.540965 | controller | save(args)
2022-09-26 16:13:26.540972 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540979 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540985 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540992 | controller | File "/usr/lib/python3.11/pickle.py", line 902, in save_tuple
2022-09-26 16:13:26.540999 | controller | save(element)
2022-09-26 16:13:26.541005 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.541012 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.541019 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.541025 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1187, in save_code
2022-09-26 16:13:26.541032 | controller | obj.co_firstlineno, obj.co_lnotab, obj.co_endlinetable,
2022-09-26 16:13:26.541039 | controller | ^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.541046 | controller | AttributeError: 'code' object has no attribute 'co_endlinetable'. Did you mean: 'co_linetable'?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78882
|
https://github.com/ansible/ansible/pull/79194
|
a76bbb18a5a80cda0d9683677aa8d5cd8a2e6093
|
645b6b858151a67eddcb63a6b5f726072271e6d9
| 2022-09-26T16:51:14Z |
python
| 2022-10-24T22:29:20Z |
changelogs/fragments/ansible-test-pylint-2.15.5.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,882 |
pylint sanity test crashes for vmware.vmware_rest
|
### Summary
The pylint sanity test crashes for vmware.vmware_rest. The error can be seen in https://3d7932ae3c3d1cd1ac23-1794bc1134f138a3d06a8b52731b06da.ssl.cf1.rackcdn.com/357/83be7ae7fe3768158f0cdfee3198013dcbfd4d69/check/ansible-test-sanity-docker-devel/c998f25/job-output.txt. This has been the case since 4d25233ece21c545254149ffe78291c734076609 was merged; the test passed before that commit and crashes with it.
Reported by @mariolenz
### Issue Type
Bug Report
### Component Name
ansible-test pylint sanity test
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Install ansible-core from devel branch, and run `ansible-test sanity --docker -v --test pylint` in a checkout of https://github.com/ansible-collections/vmware.vmware_rest
### Expected Results
Sanity tests pass (or at least do not crash).
### Actual Results
```console
2022-09-26 16:13:26.168048 | controller | Checking 12 file(s) in context "collection" with config: /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
2022-09-26 16:13:26.168224 | controller | Run command: /root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2
2022-09-26 16:13:26.540327 | controller | FATAL: Command "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2" returned exit status 1.
2022-09-26 16:13:26.540374 | controller | >>> Standard Error
2022-09-26 16:13:26.540384 | controller | Traceback (most recent call last):
2022-09-26 16:13:26.540392 | controller | File "<frozen runpy>", line 198, in _run_module_as_main
2022-09-26 16:13:26.540399 | controller | File "<frozen runpy>", line 88, in _run_code
2022-09-26 16:13:26.540407 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__main__.py", line 10, in <module>
2022-09-26 16:13:26.540414 | controller | pylint.run_pylint()
2022-09-26 16:13:26.540421 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__init__.py", line 35, in run_pylint
2022-09-26 16:13:26.540428 | controller | PylintRun(argv or sys.argv[1:])
2022-09-26 16:13:26.540474 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/run.py", line 207, in __init__
2022-09-26 16:13:26.540481 | controller | linter.check(args)
2022-09-26 16:13:26.540488 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/pylinter.py", line 666, in check
2022-09-26 16:13:26.540495 | controller | check_parallel(
2022-09-26 16:13:26.540502 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/parallel.py", line 141, in check_parallel
2022-09-26 16:13:26.540510 | controller | jobs, initializer=initializer, initargs=[dill.dumps(linter)]
2022-09-26 16:13:26.540518 | controller | ^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540525 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 364, in dumps
2022-09-26 16:13:26.540532 | controller | dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)
2022-09-26 16:13:26.540539 | controller | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540546 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 336, in dump
2022-09-26 16:13:26.540553 | controller | Pickler(file, protocol, **_kwds).dump(obj)
2022-09-26 16:13:26.540560 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 620, in dump
2022-09-26 16:13:26.540576 | controller | StockPickler.dump(self, obj)
2022-09-26 16:13:26.540584 | controller | File "/usr/lib/python3.11/pickle.py", line 487, in dump
2022-09-26 16:13:26.540591 | controller | self.save(obj)
2022-09-26 16:13:26.540598 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540605 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540612 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540618 | controller | save(state)
2022-09-26 16:13:26.540625 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540632 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540638 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540645 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540652 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540659 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540665 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540672 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540679 | controller | save(v)
2022-09-26 16:13:26.540686 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540693 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540700 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540707 | controller | save(state)
2022-09-26 16:13:26.540713 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540720 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540727 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540734 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540740 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540753 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540760 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540767 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540774 | controller | save(v)
2022-09-26 16:13:26.540780 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540787 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540794 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540801 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540807 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540814 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540821 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540828 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540834 | controller | save(v)
2022-09-26 16:13:26.540841 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540848 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540854 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540861 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540868 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540875 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540882 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540889 | controller | File "/usr/lib/python3.11/pickle.py", line 1003, in _batch_setitems
2022-09-26 16:13:26.540895 | controller | save(v)
2022-09-26 16:13:26.540902 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540909 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540915 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540922 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1963, in save_function
2022-09-26 16:13:26.540932 | controller | _save_with_postproc(pickler, (_create_function, (
2022-09-26 16:13:26.540939 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1140, in _save_with_postproc
2022-09-26 16:13:26.540946 | controller | pickler.save_reduce(*reduction, obj=obj)
2022-09-26 16:13:26.540958 | controller | File "/usr/lib/python3.11/pickle.py", line 692, in save_reduce
2022-09-26 16:13:26.540965 | controller | save(args)
2022-09-26 16:13:26.540972 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540979 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540985 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540992 | controller | File "/usr/lib/python3.11/pickle.py", line 902, in save_tuple
2022-09-26 16:13:26.540999 | controller | save(element)
2022-09-26 16:13:26.541005 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.541012 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.541019 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.541025 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1187, in save_code
2022-09-26 16:13:26.541032 | controller | obj.co_firstlineno, obj.co_lnotab, obj.co_endlinetable,
2022-09-26 16:13:26.541039 | controller | ^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.541046 | controller | AttributeError: 'code' object has no attribute 'co_endlinetable'. Did you mean: 'co_linetable'?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78882
|
https://github.com/ansible/ansible/pull/79194
|
a76bbb18a5a80cda0d9683677aa8d5cd8a2e6093
|
645b6b858151a67eddcb63a6b5f726072271e6d9
| 2022-09-26T16:51:14Z |
python
| 2022-10-24T22:29:20Z |
test/lib/ansible_test/_data/requirements/sanity.pylint.in
|
pylint == 2.15.4 # currently vetted version
pyyaml # needed for collection_detail.py
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,882 |
pylint sanity test crashes for vmware.vmware_rest
|
### Summary
The pylint sanity test crashes for vmware.vmware_rest. The error can be seen in https://3d7932ae3c3d1cd1ac23-1794bc1134f138a3d06a8b52731b06da.ssl.cf1.rackcdn.com/357/83be7ae7fe3768158f0cdfee3198013dcbfd4d69/check/ansible-test-sanity-docker-devel/c998f25/job-output.txt. This has been the case since 4d25233ece21c545254149ffe78291c734076609 was merged; the test passed before that commit and crashes with it.
Reported by @mariolenz
### Issue Type
Bug Report
### Component Name
ansible-test pylint sanity test
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
Install ansible-core from devel branch, and run `ansible-test sanity --docker -v --test pylint` in a checkout of https://github.com/ansible-collections/vmware.vmware_rest
### Expected Results
Sanity tests pass (or at least do not crash).
### Actual Results
```console
2022-09-26 16:13:26.168048 | controller | Checking 12 file(s) in context "collection" with config: /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg
2022-09-26 16:13:26.168224 | controller | Run command: /root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2
2022-09-26 16:13:26.540327 | controller | FATAL: Command "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/bin/python -m pylint --jobs 0 --reports n --max-line-length 160 --max-complexity 20 --rcfile /root/ansible/test/lib/ansible_test/_util/controller/sanity/pylint/config/collection.cfg --output-format json --load-plugins deprecated,pylint.extensions.mccabe,string_format,unwanted manual/source/conf.py plugins/doc_fragments/__init__.py plugins/doc_fragments/moid.py plugins/lookup/cluster_moid.py plugins/lookup/datacenter_moid.py plugins/lookup/datastore_moid.py plugins/lookup/folder_moid.py plugins/lookup/host_moid.py plugins/lookup/network_moid.py plugins/lookup/resource_pool_moid.py plugins/lookup/vm_moid.py plugins/plugin_utils/lookup.py --collection-name vmware.vmware_rest --collection-version 2.2.1-dev2" returned exit status 1.
2022-09-26 16:13:26.540374 | controller | >>> Standard Error
2022-09-26 16:13:26.540384 | controller | Traceback (most recent call last):
2022-09-26 16:13:26.540392 | controller | File "<frozen runpy>", line 198, in _run_module_as_main
2022-09-26 16:13:26.540399 | controller | File "<frozen runpy>", line 88, in _run_code
2022-09-26 16:13:26.540407 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__main__.py", line 10, in <module>
2022-09-26 16:13:26.540414 | controller | pylint.run_pylint()
2022-09-26 16:13:26.540421 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/__init__.py", line 35, in run_pylint
2022-09-26 16:13:26.540428 | controller | PylintRun(argv or sys.argv[1:])
2022-09-26 16:13:26.540474 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/run.py", line 207, in __init__
2022-09-26 16:13:26.540481 | controller | linter.check(args)
2022-09-26 16:13:26.540488 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/pylinter.py", line 666, in check
2022-09-26 16:13:26.540495 | controller | check_parallel(
2022-09-26 16:13:26.540502 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/pylint/lint/parallel.py", line 141, in check_parallel
2022-09-26 16:13:26.540510 | controller | jobs, initializer=initializer, initargs=[dill.dumps(linter)]
2022-09-26 16:13:26.540518 | controller | ^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540525 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 364, in dumps
2022-09-26 16:13:26.540532 | controller | dump(obj, file, protocol, byref, fmode, recurse, **kwds)#, strictio)
2022-09-26 16:13:26.540539 | controller | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.540546 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 336, in dump
2022-09-26 16:13:26.540553 | controller | Pickler(file, protocol, **_kwds).dump(obj)
2022-09-26 16:13:26.540560 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 620, in dump
2022-09-26 16:13:26.540576 | controller | StockPickler.dump(self, obj)
2022-09-26 16:13:26.540584 | controller | File "/usr/lib/python3.11/pickle.py", line 487, in dump
2022-09-26 16:13:26.540591 | controller | self.save(obj)
2022-09-26 16:13:26.540598 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540605 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540612 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540618 | controller | save(state)
2022-09-26 16:13:26.540625 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540632 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540638 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540645 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540652 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540659 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540665 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540672 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540679 | controller | save(v)
2022-09-26 16:13:26.540686 | controller | File "/usr/lib/python3.11/pickle.py", line 603, in save
2022-09-26 16:13:26.540693 | controller | self.save_reduce(obj=obj, *rv)
2022-09-26 16:13:26.540700 | controller | File "/usr/lib/python3.11/pickle.py", line 717, in save_reduce
2022-09-26 16:13:26.540707 | controller | save(state)
2022-09-26 16:13:26.540713 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540720 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540727 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540734 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540740 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540753 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540760 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540767 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540774 | controller | save(v)
2022-09-26 16:13:26.540780 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540787 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540794 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540801 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540807 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540814 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540821 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540828 | controller | File "/usr/lib/python3.11/pickle.py", line 998, in _batch_setitems
2022-09-26 16:13:26.540834 | controller | save(v)
2022-09-26 16:13:26.540841 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540848 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540854 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540861 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1251, in save_module_dict
2022-09-26 16:13:26.540868 | controller | StockPickler.save_dict(pickler, obj)
2022-09-26 16:13:26.540875 | controller | File "/usr/lib/python3.11/pickle.py", line 972, in save_dict
2022-09-26 16:13:26.540882 | controller | self._batch_setitems(obj.items())
2022-09-26 16:13:26.540889 | controller | File "/usr/lib/python3.11/pickle.py", line 1003, in _batch_setitems
2022-09-26 16:13:26.540895 | controller | save(v)
2022-09-26 16:13:26.540902 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540909 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540915 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540922 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1963, in save_function
2022-09-26 16:13:26.540932 | controller | _save_with_postproc(pickler, (_create_function, (
2022-09-26 16:13:26.540939 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1140, in _save_with_postproc
2022-09-26 16:13:26.540946 | controller | pickler.save_reduce(*reduction, obj=obj)
2022-09-26 16:13:26.540958 | controller | File "/usr/lib/python3.11/pickle.py", line 692, in save_reduce
2022-09-26 16:13:26.540965 | controller | save(args)
2022-09-26 16:13:26.540972 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.540979 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.540985 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.540992 | controller | File "/usr/lib/python3.11/pickle.py", line 902, in save_tuple
2022-09-26 16:13:26.540999 | controller | save(element)
2022-09-26 16:13:26.541005 | controller | File "/usr/lib/python3.11/pickle.py", line 560, in save
2022-09-26 16:13:26.541012 | controller | f(self, obj) # Call unbound method with explicit self
2022-09-26 16:13:26.541019 | controller | ^^^^^^^^^^^^
2022-09-26 16:13:26.541025 | controller | File "/root/.ansible/test/venv/sanity.pylint/3.11/9aabe968/lib/python3.11/site-packages/dill/_dill.py", line 1187, in save_code
2022-09-26 16:13:26.541032 | controller | obj.co_firstlineno, obj.co_lnotab, obj.co_endlinetable,
2022-09-26 16:13:26.541039 | controller | ^^^^^^^^^^^^^^^^^^^
2022-09-26 16:13:26.541046 | controller | AttributeError: 'code' object has no attribute 'co_endlinetable'. Did you mean: 'co_linetable'?
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78882
|
https://github.com/ansible/ansible/pull/79194
|
a76bbb18a5a80cda0d9683677aa8d5cd8a2e6093
|
645b6b858151a67eddcb63a6b5f726072271e6d9
| 2022-09-26T16:51:14Z |
python
| 2022-10-24T22:29:20Z |
test/lib/ansible_test/_data/requirements/sanity.pylint.txt
|
# edit "sanity.pylint.in" and generate with: hacking/update-sanity-requirements.py --test pylint
astroid==2.12.11
dill==0.3.5.1
isort==5.10.1
lazy-object-proxy==1.7.1
mccabe==0.7.0
platformdirs==2.5.2
pylint==2.15.4
PyYAML==6.0
tomli==2.0.1
tomlkit==0.11.5
typing_extensions==4.3.0
wrapt==1.14.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,174 |
"ansible-galaxy collection list ns.col" has return code 0 when collection is not installed
|
### Summary
To reproduce, run these commands where `ns.col` is the name of a collection you do not have installed and the value of `-p` is a valid collection path that exists:
`ansible-galaxy collection list ns.col -p collections/`
`echo $?`
The value of `$?` on devel is 0, but since the collection wasn't found it should be 1. The same issue exists for listing specific roles that do not exist.
To fix:
1. [This method](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1494-L1502) needs to return the correct exit code from `execute_list_role` and `execute_list_collection` (see the sketch after this list).
2. `execute_list_collection` and `execute_list_role` will need to be fixed to return 1 if a collection/role was requested and not found.
For example, on [this line](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1680) add:
```python
if not collection_found and collection_name:
return 1
```
3. Add a new test to [this file](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml), similar to [this one](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml#L111-L116) that lists a single collection that does not exist. Then add an assert task following it to test that the command result's rc is 1.
4. Add a `bugfixes` [changelog fragment](https://docs.ansible.com/ansible/latest/community/development_process.html#creating-a-changelog-fragment).
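Taken together, steps 1 and 2 might look like the following sketch (assumption: the dispatch method resembles `execute_list` in `lib/ansible/cli/galaxy.py` and currently discards the return values; the real method bodies are simplified here):
```python
# Sketch only -- simplified from lib/ansible/cli/galaxy.py.
def execute_list(self):
    """List roles or collections, propagating the subcommand's return code."""
    if context.CLIARGS['type'] == 'role':
        return self.execute_list_role()          # step 1: return the exit code
    elif context.CLIARGS['type'] == 'collection':
        return self.execute_list_collection()    # step 1: return the exit code
```
With step 2's `return 1` in place, `ansible-galaxy collection list ns.col` for a missing collection would then exit non-zero.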
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
```bash
ansible-galaxy collection list ns.col -p collections/
echo $?
```
### Expected Results
1
### Actual Results
```console
0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79174
|
https://github.com/ansible/ansible/pull/79195
|
f3f7d442389208ed5b249902c01d7d888f7c0546
|
da3a7618baa500899d11bb9a80863fdb1f80e3f1
| 2022-10-20T13:53:07Z |
python
| 2022-10-25T16:00:56Z |
changelogs/fragments/ansible-galaxy-role-search-rc.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,174 |
"ansible-galaxy collection list ns.col" has return code 0 when collection is not installed
|
### Summary
To reproduce, run these commands where `ns.col` is the name of a collection you do not have installed and the value of `-p` is a valid collection path that exists:
`ansible-galaxy collection list ns.col -p collections/`
`echo $?`
The value of `$?` on devel is 0, but since the collection wasn't found it should be 1. The same issue exists for listing specific roles that do not exist.
To fix:
1. [This method](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1494-L1502) needs to return the correct exit code from `execute_list_role` and `execute_list_collection`.
2. `execute_list_collection` and `execute_list_role` will need to be fixed to return 1 if a collection/role was requested and not found.
For example, on [this line](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1680) add:
```python
if not collection_found and collection_name:
return 1
```
3. Add a new test to [this file](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml), similar to [this one](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml#L111-L116) that lists a single collection that does not exist. Then add an assert task following it to test that the command result's rc is 1.
4. Add a `bugfixes` [changelog fragment](https://docs.ansible.com/ansible/latest/community/development_process.html#creating-a-changelog-fragment).
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
```bash
ansible-galaxy collection list ns.col -p collections/
echo $?
```
### Expected Results
1
### Actual Results
```console
0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79174
|
https://github.com/ansible/ansible/pull/79195
|
f3f7d442389208ed5b249902c01d7d888f7c0546
|
da3a7618baa500899d11bb9a80863fdb1f80e3f1
| 2022-10-20T13:53:07Z |
python
| 2022-10-25T16:00:56Z |
docs/docsite/rst/porting_guides/porting_guide_core_2.15.rst
|
.. _porting_2.15_guide_core:
*******************************
Ansible-core 2.15 Porting Guide
*******************************
This section discusses the behavioral changes between ``ansible-core`` 2.14 and ``ansible-core`` 2.15.
It is intended to assist in updating your playbooks, plugins and other parts of your Ansible infrastructure so they will work with this version of Ansible.
We suggest you read this page along with `ansible-core Changelog for 2.15 <https://github.com/ansible/ansible/blob/stable-2.15/changelogs/CHANGELOG-v2.15.rst>`_ to understand what updates you may need to make.
This document is part of a collection on porting. The complete list of porting guides can be found at :ref:`porting guides <porting_guides>`.
.. contents:: Topics
Playbook
========
No notable changes
Command Line
============
No notable changes
Deprecated
==========
* Providing a list of dictionaries to ``vars:`` is deprecated in favor of supplying a dictionary.
Instead of:
.. code-block:: yaml
vars:
- var1: foo
- var2: bar
Use:
.. code-block:: yaml
vars:
var1: foo
var2: bar
Modules
=======
No notable changes
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
No notable changes
Plugins
=======
No notable changes
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,174 |
"ansible-galaxy collection list ns.col" has return code 0 when collection is not installed
|
### Summary
To reproduce, run these commands where `ns.col` is the name of a collection you do not have installed and the value of `-p` is a valid collection path that exists:
`ansible-galaxy collection list ns.col -p collections/`
`echo $?`
The value of `$?` on devel is 0, but since the collection wasn't found it should be 1. The same issue exists for listing specific roles that do not exist.
To fix:
1. [This method](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1494-L1502) needs to return the correct exit code from `execute_list_role` and `execute_list_collection`.
2. `execute_list_collection` and `execute_list_role` will need to be fixed to return 1 if a collection/role was requested and not found.
For example, on [this line](https://github.com/ansible/ansible/blob/stable-2.14/lib/ansible/cli/galaxy.py#L1680) add:
```python
if not collection_found and collection_name:
return 1
```
3. Add a new test to [this file](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml), similar to [this one](https://github.com/ansible/ansible/blob/stable-2.14/test/integration/targets/ansible-galaxy-collection/tasks/list.yml#L111-L116) that lists a single collection that does not exist. Then add an assert task following it to test that the command result's rc is 1.
4. Add a `bugfixes` [changelog fragment](https://docs.ansible.com/ansible/latest/community/development_process.html#creating-a-changelog-fragment).
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
```bash
ansible-galaxy collection list ns.col -p collections/
echo $?
```
### Expected Results
1
### Actual Results
```console
0
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79174
|
https://github.com/ansible/ansible/pull/79195
|
f3f7d442389208ed5b249902c01d7d888f7c0546
|
da3a7618baa500899d11bb9a80863fdb1f80e3f1
| 2022-10-20T13:53:07Z |
python
| 2022-10-25T16:00:56Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
import typing as t
from dataclasses import dataclass
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
# config definition by position: name, required, type
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
('timeout', False, 'int'),
]
# config definition fields
SERVER_ADDITIONAL = {
'v3': {'default': 'False'},
'validate_certs': {'default': True, 'cli': [{'name': 'validate_certs'}]},
'timeout': {'default': '60', 'cli': [{'name': 'timeout'}]},
'token': {'default': None},
}
# override default if the generic is set
if C.GALAXY_IGNORE_CERTS is not None:
SERVER_ADDITIONAL['validate_certs'].update({'default': not C.GALAXY_IGNORE_CERTS})
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
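        # Pass straight through when the caller already supplies a manager.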
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': context.CLIARGS['validate_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
def _display_header(path, h1, h2, w1=10, w2=7):
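    """Display a list header: the path, two column titles, and dash underlines at least as wide as each title."""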
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
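    """Display one role as ``- <name>, <version>``, falling back to ``(unknown version)``."""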
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
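    """Display one collection row, padding the FQCN and version columns to at least the minimum widths."""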
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
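    """Return the longest FQCN length and the longest version length across the given collection(s)."""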
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
def validate_signature_count(value):
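    """argparse type callback: return ``value`` if it matches SIGNATURE_COUNT_RE, else raise ValueError."""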
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
@dataclass
class RoleDistributionServer:
_api: t.Union[GalaxyAPI, None]
api_servers: list[GalaxyAPI]
@property
def api(self):
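        """Lazily pick and cache the first configured server advertising the v1 (roles) API, else the first server."""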
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self.lazy_role_api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None)
common.add_argument('--timeout', dest='timeout', type=int,
help="The time to wait for operations against the galaxy server, defaults to 60s.")
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
                                     'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Install collection artifacts (tarballs) without contacting any distribution servers. '
'This does not apply to collections in remote Git repositories or URLs to remote tarballs.'
)
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
r_re = re.compile(r'^(?<!-)-[a-zA-Z]*r[a-zA-Z]*') # -r, -fr
contains_r = bool([a for a in self._raw_args if r_re.match(a)])
role_file_re = re.compile(r'--role-file($|=)') # --role-file foo, --role-file=foo
contains_role_file = bool([a for a in self._raw_args if role_file_re.match(a)])
if self._implicit_role and (contains_r or contains_role_file):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built to. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
# ensure we have 'usable' cli option
setattr(options, 'validate_certs', (None if options.ignore_certs is None else not options.ignore_certs))
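        # Illustrative note: passing --ignore-certs yields validate_certs=False here,
        # while leaving the flag unset yields None so config-based resolution can apply later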
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
config_def = {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
if key in SERVER_ADDITIONAL:
config_def.update(SERVER_ADDITIONAL[key])
return config_def
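        # Illustrative resolution for the helper above (made-up server name): section='release_galaxy',
        # key='token' reads ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_TOKEN from the environment or the
        # token entry under the [galaxy_server.release_galaxy] ini section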
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache', 'timeout']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Abuse the 'plugin config' by making 'galaxy_server' a type of plugin
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
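            # Illustrative example (hypothetical server name): with
            #   GALAXY_SERVER_LIST = ['release_galaxy']
            # this iteration builds config definitions for url/username/password/token
            # under the [galaxy_server.release_galaxy] section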
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
# resolve the config created options above with existing config and user options
server_options = C.config.get_plugin_options('galaxy_server', server_key)
            # auth_url is used to create the token, but not directly by GalaxyAPI, so
            # it doesn't need to be passed as a kwarg to GalaxyAPI; same for the others we pop here
auth_url = server_options.pop('auth_url')
client_id = server_options.pop('client_id')
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
v3 = server_options.pop('v3')
validate_certs = server_options['validate_certs']
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username, server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
# resolve validate_certs
v_config_default = True if C.GALAXY_IGNORE_CERTS is None else not C.GALAXY_IGNORE_CERTS
validate_certs = v_config_default if context.CLIARGS['validate_certs'] is None else context.CLIARGS['validate_certs']
if cmd_server:
            # Cmd args take precedence over the config entry but first check if the arg was a name and use that config
# entry, otherwise create a new API entry for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs,
**galaxy_options
))
# checks api versions once a GalaxyRole makes an api call
# self.api can be used to evaluate the best server immediately
self.lazy_role_api = RoleDistributionServer(None, self.api_servers)
return context.CLIARGS['func']()
@property
def api(self):
return self.lazy_role_api.api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
        Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are 2
        requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
          scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
        :return: a dict containing roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.lazy_role_api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.lazy_role_api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
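        # e.g. a requirement {'name': 'ns.coll', 'source': 'release_galaxy'} resolves to the
        # configured server of that name, while an unmatched URL source gets a one-off
        # GalaxyAPI entry without credentials (names here are illustrative)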
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
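        # e.g. (illustrative) '~/roles/$PROJECT' has its user and env vars expanded and
        # resolves to an absolute path such as '/home/user/roles/myproj'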
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
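        # Illustrative transformation by comment_ify above:
        #   "See L(Galaxy docs, https://galaxy.ansible.com) for C(name)"
        # becomes a wrapped comment line:
        #   "# See Galaxy docs <https://galaxy.ansible.com> for 'name'"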
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
for root, dirs, files in os.walk(b_obj_path, topdown=True):
for old_dir in dirs:
path = os.path.join(root, old_dir)
shutil.rmtree(path)
for old_file in files:
path = os.path.join(root, old_file)
os.unlink(path)
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
        Prints out detailed information about an installed role as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles(``ansible-galaxy role install``), or one or more collections(``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
        can be a name (which will be downloaded via the Galaxy API and GitHub), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
                    # We only want to display a warning for 'ansible-galaxy install -r ... -p ...'. In other cases the
                    # user was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.lazy_role_api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
try:
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
except KeyError:
if self._implicit_role:
raise AnsibleError(
'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" '
'instead of "ansible-galaxy install".'
)
raise
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
offline=context.CLIARGS.get('offline', False),
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
            # only process roles in role files when the name matches, if given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
# NOTE: the meta file is also required for installing the role, not just dependencies
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata_dependencies + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.lazy_role_api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
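                            # e.g. a dependency listed simply as 'common' (illustrative name) has no
                            # 'namespace.name' dot and no scm, so a Galaxy lookup would fail anyway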
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
        Removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.lazy_role_api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.lazy_role_api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
if artifacts_manager is not None:
artifacts_manager.require_build_metadata = False
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
                    # There was no 'ansible_collections/' directory in the path, so there
                    # are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
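                        # Resulting shape (illustrative values):
                        #   {'/path/ansible_collections': {'ns.coll': {'version': '1.0.0'}}}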
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
        ''' Searches for roles on the Ansible Galaxy server '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return 1
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return 0
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,131 |
Implement allow_change_held_packages for remove
|
### Summary
I need to fully purge and then reinstall a number of packages that also have their versions pinned. I was expecting the argument `allow_change_held_packages` to work, but apparently it's only implemented for the `install` side, not the `remove` side.
### Issue Type
Feature Idea
### Component Name
apt
### Additional Information
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: Remove foo
ansible.builtin.apt:
allow_change_held_packages: yes
state: absent
name:
- foo
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78131
|
https://github.com/ansible/ansible/pull/78203
|
26a477561168cd731c86fb1ceffb0394c81cb0a7
|
e2450d4886c43528ee8a870cc23cac73afdc6144
| 2022-06-23T16:31:53Z |
python
| 2022-11-02T16:02:04Z |
changelogs/fragments/apt-remove_allow-change-held-packages.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,131 |
Implement allow_change_held_packages for remove
|
body: (identical to the issue body above)
issue_url: https://github.com/ansible/ansible/issues/78131
pull_url: https://github.com/ansible/ansible/pull/78203
before_fix_sha: 26a477561168cd731c86fb1ceffb0394c81cb0a7
after_fix_sha: e2450d4886c43528ee8a870cc23cac73afdc6144
report_datetime: 2022-06-23T16:31:53Z
language: python
commit_datetime: 2022-11-02T16:02:04Z
updated_file: lib/ansible/modules/apt.py
file_content:
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt-packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like C(foo), or package specifier with version, like C(foo=1.0) or C(foo>=1.0).
Name wildcards (fnmatch) like C(apt*) and version wildcards like C(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. C(latest) ensures that the latest version is installed. C(build-dep) ensures the package build dependencies
are installed. C(fixed) attempts to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
- Default is not to update the cache.
aliases: [ update-cache ]
type: bool
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the I(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets I(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if the module state is set to I(absent).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). C(true) installs recommended packages. C(false) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies C(allow_unauthenticated: yes) and C(allow_downgrade: yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
clean:
description:
- Run the equivalent of C(apt-get clean) to clear out the local repository of retrieved package files. It removes everything but
the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/.
- Can be run as part of the package installation (clean runs before install) or as a separate step.
type: bool
default: 'no'
version_added: "2.13"
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'C(allow_unauthenticated) is only supported with state: I(install)/I(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
allow_downgrade:
description:
- Corresponds to the C(--allow-downgrades) option for I(apt).
- This option enables the named package and version to replace an already installed higher version of that package.
- Note that setting I(allow_downgrade=true) can make this module behave in a non-idempotent way.
- (The task could end up with a set of packages that does not match the complete list of specified packages to install).
aliases: [ allow-downgrade, allow_downgrades, allow-downgrades ]
type: bool
default: 'no'
version_added: "2.12"
allow_change_held_packages:
description:
- Allows changing the version of a package which is on the apt hold list
type: bool
default: 'no'
version_added: '2.13'
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package, use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as comma separated list
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
- If C(://) is in the path, Ansible will attempt to download the deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If C(true), remove unused dependency packages for all module states except I(build-dep). It can also be used as the only option.
- Prior to version 2.4, autoclean was also an alias for autoremove; now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If C(true), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If C(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If C(true), it is ensured that no packages will be removed or the task will fail.'
- 'C(fail_on_autoremove) is supported with any state except C(absent).'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
lock_timeout:
description:
- How many seconds will this action wait to acquire a lock on the apt db.
- Sometimes there is a transitory lock and this will retry at least until timeout is hit.
type: int
default: 60
version_added: "2.12"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
- Three of the upgrade modes (C(full), C(safe) and its alias C(true)) required C(aptitude) up to 2.3, since 2.4 C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
For example when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that throws
a return code of 101 will stop Postgresql 9.5 starting up after install. Remove the file or remove its execute permission afterwards.
- The apt-get commandline supports implicit regex matches here but we do not because it can let typos through more easily
(If you typo C(foo) as C(fo) apt-get would install packages that have "fo" in their name with a warning and a prompt for the user.
Since we don't have warnings and prompts before installing we disallow this. Use an explicit fnmatch pattern if you want wildcarding)
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option.
- When C(default_release) is used, an implicit priority of 990 is used. This is the same behavior as C(apt-get -t).
- When an exact version is specified, an implicit priority of 1001 is used.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
ansible.builtin.apt:
name: apache2
state: present
- name: Update repositories cache and install "foo" package
ansible.builtin.apt:
name: foo
update_cache: yes
- name: Remove "foo" package
ansible.builtin.apt:
name: foo
state: absent
- name: Install the package "foo"
ansible.builtin.apt:
name: foo
- name: Install a list of packages
ansible.builtin.apt:
pkg:
- foo
- foo-tools
- name: Install the version '1.00' of package "foo"
ansible.builtin.apt:
name: foo=1.00
- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
ansible.builtin.apt:
name: nginx
state: latest
default_release: squeeze-backports
update_cache: yes
- name: Install the version '1.18.0' of package "nginx" and allow potential downgrades
ansible.builtin.apt:
name: nginx=1.18.0
state: present
allow_downgrade: yes
- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed.
ansible.builtin.apt:
name: zfsutils-linux
state: latest
fail_on_autoremove: yes
- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
ansible.builtin.apt:
name: openjdk-6-jdk
state: latest
install_recommends: no
- name: Update all packages to their latest version
ansible.builtin.apt:
name: "*"
state: latest
- name: Upgrade the OS (apt-get dist-upgrade)
ansible.builtin.apt:
upgrade: dist
- name: Run the equivalent of "apt-get update" as a separate step
ansible.builtin.apt:
update_cache: yes
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
ansible.builtin.apt:
update_cache: yes
cache_valid_time: 3600
- name: Pass options to dpkg on run
ansible.builtin.apt:
upgrade: dist
update_cache: yes
dpkg_options: 'force-confold,force-confdef'
- name: Install a .deb package
ansible.builtin.apt:
deb: /tmp/mypackage.deb
- name: Install the build dependencies for package "foo"
ansible.builtin.apt:
pkg: foo
state: build-dep
- name: Install a .deb package from the internet
ansible.builtin.apt:
deb: https://example.com/python-ppq_0.1-1_all.deb
- name: Remove useless packages from the cache
ansible.builtin.apt:
autoclean: yes
- name: Remove dependencies that are no longer required
ansible.builtin.apt:
autoremove: yes
- name: Run the equivalent of "apt-get clean" as a separate step
apt:
clean: yes
'''
RETURN = '''
cache_updated:
description: if the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: |-
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
apache2-bin ...
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import itertools
import os
import random
import re
import shutil
import sys
import tempfile
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native, to_text
from ansible.module_utils.six import PY3, string_types
from ansible.module_utils.urls import fetch_file
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
HAS_PYTHON_APT = False
try:
import apt
import apt.debfile
import apt_pkg
HAS_PYTHON_APT = True
except ImportError:
apt = apt_pkg = None
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
It allows the user to prevent dpkg from starting the corresponding service when installing
a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
This method will be called when we leave the context, after `apt-get …` has been called
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
parts = re.split(r'(>?=)', pkgspec, 1)
if len(parts) > 1:
return parts
return parts[0], None, None
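# Illustrative examples (added for clarity): package_split("foo>=1.0") returns
# ['foo', '>=', '1.0'] and package_split("foo=1.0") returns ['foo', '=', '1.0'],
# while package_split("foo") returns ('foo', None, None); callers unpack all
# three shapes as name, version_cmp, version.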
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
def package_best_match(pkgname, version_cmp, version, release, cache):
policy = apt_pkg.Policy(cache)
policy.read_pinfile(apt_pkg.config.find_file("Dir::Etc::preferences"))
policy.read_pindir(apt_pkg.config.find_file("Dir::Etc::preferencesparts"))
if release:
# 990 is the priority used in `apt-get -t`
policy.create_pin('Release', pkgname, release, 990)
if version_cmp == "=":
# Installing a specific version from command line overrides all pinning
# We don't mimic this exactly, but instead set a priority which is higher than all APT built-in pin priorities.
policy.create_pin('Version', pkgname, version, 1001)
pkg = cache[pkgname]
pkgver = policy.get_candidate_ver(pkg)
if not pkgver:
return None
if version_cmp == "=" and not fnmatch.fnmatch(pkgver.ver_str, version):
# Even though we put in a pin policy, it can be ignored if there is no
# possible candidate.
return None
return pkgver.ver_str
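# Illustrative note (added for clarity): the pin priorities above follow apt's
# own conventions -- 990 matches the implicit priority of `apt-get -t <release>`,
# and 1001 is high enough to beat every built-in pin priority when an exact
# version is requested.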
def package_status(m, pkgname, version_cmp, version, default_release, cache, state):
"""
:return: A tuple of (installed, installed_version, version_installable, has_files). *installed* indicates whether
the package (regardless of version) is installed. *installed_version* indicates whether the installed package
matches the provided version criteria. *version_installable* provides the latest matching version that can be
installed. In the case of virtual packages where we can't determine an applicable match, True is returned.
*has_files* indicates whether the package has files on the filesystem (even if not installed, meaning a purge is
required).
"""
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
# When this is a virtual package satisfied by only
# one installed package, return the status of the target
# package to avoid requesting re-install
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, installed_version, version_installable, has_files = \
package_status(m, package.name, version_cmp, version, default_release, cache, state='install')
if installed:
return installed, installed_version, version_installable, has_files
# Otherwise return nothing so apt will sort out
# what package to satisfy this with
return False, False, True, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as not installed and let apt-get install deal with it
return False, False, True, False
else:
return False, False, None, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_best = package_best_match(pkgname, version_cmp, version, default_release, cache._cache)
version_is_installed = False
version_installable = None
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
if version_cmp == "=":
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
if version_best and installed_version != version_best and fnmatch.fnmatch(version_best, version):
version_installable = version_best
elif version_cmp == ">=":
version_is_installed = apt_pkg.version_compare(installed_version, version) >= 0
if version_best and installed_version != version_best and apt_pkg.version_compare(version_best, version) >= 0:
version_installable = version_best
else:
version_is_installed = True
if version_best and installed_version != version_best:
version_installable = version_best
else:
version_installable = version_best
return package_is_installed, version_is_installed, version_installable, has_files
def expand_dpkg_options(dpkg_options_compressed):
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
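# Illustrative example (added for clarity):
#   expand_dpkg_options("force-confdef,force-confold")
# returns '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'.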
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
if not isinstance(pkgspec_pattern, string_types):
m.fail_json(msg="Invalid type for package name, expected string but got %s" % type(pkgspec_pattern))
pkgname_pattern, version_cmp, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % to_text(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
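# Illustrative note (added for clarity): parse_diff() trims the tool output to
# the span between the start marker ('Resolving dependencies...' for aptitude,
# 'Reading state information...' for apt-get) and the '<N> (packages )?upgraded'
# summary line, and returns it under the 'prepared' diff key.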
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False, allow_downgrade=False, allow_change_held_packages=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version_cmp, version = package_split(package)
package_names.append(name)
installed, installed_version, version_installable, has_files = package_status(m, name, version_cmp, version, default_release, cache, state='install')
if not installed and only_upgrade:
# only_upgrade upgrades packages that are already installed
# since this package is not installed, skip it
continue
if not installed_version and not version_installable:
status = False
data = dict(msg="no available installation candidate for %s" % package)
return (status, data)
if version_installable and ((not installed and not only_upgrade) or upgrade or not installed_version):
if version_installable is not True:
pkg_list.append("'%s=%s'" % (name, version_installable))
elif version:
pkg_list.append("'%s=%s'" % (name, version))
else:
pkg_list.append("'%s'" % name)
elif installed_version and version_installable and version_cmp == "=":
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# This is legacy behavior, and isn't documented (in fact it does
# things documentations says it shouldn't). It should not be relied
# upon.
pkg_list.append("'%s=%s'" % (name, version))
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
if allow_downgrade:
cmd += " --allow-downgrades"
if allow_change_held_packages:
cmd += " --allow-change-held-packages"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep and not m.check_mode:
mark_installed_manually(m, package_names)
return (status, data)
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
def install_deb(
m, debs, cache, force, fail_on_autoremove, install_recommends,
allow_unauthenticated,
allow_downgrade,
allow_change_held_packages,
dpkg_options,
):
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file, cache=apt.Cache())
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if hasattr(apt_pkg, 'get_architectures') and len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check():
if force or ("later version" in pkg._failure_string and allow_downgrade):
pass
else:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version_cmp, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version_cmp, version, None, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -q -y %s %s %s %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, autoremove, check_arg, packages)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
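# Hedged sketch (not in this revision of the module; added for illustration):
# issue 78131 asks for allow_change_held_packages on the remove path as well.
# Mirroring install() above, one minimal approach would be to accept the flag
# and append the same apt-get option:
#
#   def remove(m, pkgspec, cache, purge=False, force=False,
#              dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False,
#              allow_change_held_packages=False):
#       ...
#       if allow_change_held_packages:
#           cmd += " --allow-change-held-packages"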
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def aptclean(m):
clean_rc, clean_out, clean_err = m.run_command(['apt-get', 'clean'])
if m._diff:
clean_diff = parse_diff(clean_out)
else:
clean_diff = {}
if clean_rc:
m.fail_json(msg="apt-get clean failed", stdout=clean_out, rc=clean_rc)
if clean_err:
m.fail_json(msg="apt-get clean failed: %s" % clean_err, stdout=clean_out, rc=clean_rc)
return clean_out, clean_err
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
allow_downgrade=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
allow_downgrade = '--allow-downgrades' if allow_downgrade else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s %s' % (
apt_cmd_path,
dpkg_options,
force_yes,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade,
check_arg,
upgrade_command,
)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and if no cache file is found return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update till it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
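# Illustrative note (added for clarity): get_cache() works around corrupt
# package lists (see the issue linked above) by running `apt-get update -q`
# up to two times and then re-opening the cache before giving up.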
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
clean=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
allow_downgrade=dict(type='bool', default=False, aliases=['allow-downgrade', 'allow_downgrades', 'allow-downgrades']),
allow_change_held_packages=dict(type='bool', default=False),
lock_timeout=dict(type='int', default=60),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
# We screenscrape apt-get and aptitude output for information, so we need
# to make sure we use the best parsable locale when running commands.
# We also set apt-specific vars for the desired behaviour.
locale = get_best_parsable_locale(module)
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
LANG=locale,
LC_ALL=locale,
LC_MESSAGES=locale,
LC_CTYPE=locale,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
# We skip the cache update when auto-installing the dependency if the
# user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % apt_pkg_name)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % apt_pkg_name)
module.run_command(['apt-get', 'update'], check_rc=True)
# try to install the apt Python binding
module.run_command(['apt-get', 'install', '--no-install-recommends', apt_pkg_name, '-y', '-q'], check_rc=True)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['clean'] is True:
aptclean_stdout, aptclean_stderr = aptclean(module)
# If there is nothing else to do exit. This will set state as
# changed based on whether the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=True,
msg=aptclean_stdout,
stdout=aptclean_stdout,
stderr=aptclean_stderr
)
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
allow_downgrade = p['allow_downgrade']
allow_change_held_packages = p['allow_change_held_packages']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
# max times we'll retry
deadline = time.time() + p['lock_timeout']
# keep running on lock issues unless timeout or resolution is hit.
while True:
# Get the cache object, this has 3 retries built in
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# Cache valid time is default 0, which will update the cache if
# needed and `update_cache` was set to true
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
if not module.check_mode:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if module.check_mode or updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
# If there is nothing else to do exit. This will set state as
# changed based on whether the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(
module,
p['upgrade'],
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
allow_change_held_packages=allow_change_held_packages,
allow_downgrade=allow_downgrade,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(
module,
'yes',
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
# Store when the update time was last
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(module, packages, cache, p['purge'], force=force_yes, dpkg_options=dpkg_options, autoremove=autoremove)
except apt.cache.LockFailedException as lockFailedException:
if time.time() < deadline:
continue
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
# got here w/o exception and/or exit???
module.fail_json(msg='Unexpected code path taken, we really should have exited before, this is a bug')
if __name__ == "__main__":
main()
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 78131
title: Implement allow_change_held_packages for remove
body: (identical to the issue body above)
issue_url: https://github.com/ansible/ansible/issues/78131
pull_url: https://github.com/ansible/ansible/pull/78203
before_fix_sha: 26a477561168cd731c86fb1ceffb0394c81cb0a7
after_fix_sha: e2450d4886c43528ee8a870cc23cac73afdc6144
report_datetime: 2022-06-23T16:31:53Z
language: python
commit_datetime: 2022-11-02T16:02:04Z
updated_file: test/integration/targets/apt/tasks/apt.yml
file_content:
- name: use Debian mirror
set_fact:
distro_mirror: http://ftp.debian.org/debian
when: ansible_distribution == 'Debian'
- name: use Ubuntu mirror
set_fact:
distro_mirror: http://archive.ubuntu.com/ubuntu
when: ansible_distribution == 'Ubuntu'
# UNINSTALL 'python-apt'
# The `apt` module has the smarts to auto-install `python-apt(3)`. To test, we
# will first uninstall `python-apt`.
- name: uninstall python-apt with apt
apt:
pkg: [python-apt, python3-apt]
state: absent
purge: yes
register: apt_result
# In check mode, auto-install of `python-apt` must fail
- name: test fail uninstall hello without required apt deps in check mode
apt:
pkg: hello
state: absent
purge: yes
register: apt_result
check_mode: yes
ignore_errors: yes
- name: verify fail uninstall hello without required apt deps in check mode
assert:
that:
- apt_result is failed
- '"If run normally this module can auto-install it." in apt_result.msg'
- name: check with dpkg
shell: dpkg -s python-apt python3-apt
register: dpkg_result
ignore_errors: true
# UNINSTALL 'hello'
# With 'python-apt' uninstalled, the first call to 'apt' should install
# python-apt without updating the cache.
- name: uninstall hello with apt and prevent updating the cache
apt:
pkg: hello
state: absent
purge: yes
update_cache: no
register: apt_result
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify uninstall hello with apt and prevent updating the cache
assert:
that:
- "'changed' in apt_result"
- apt_result is not changed
- "dpkg_result.rc == 1"
- name: Test installing fnmatch package
apt:
name:
- hel?o
- he?lo
register: apt_install_fnmatch
- name: Test uninstalling fnmatch package
apt:
name:
- hel?o
- he?lo
state: absent
register: apt_uninstall_fnmatch
- name: verify fnmatch
assert:
that:
- apt_install_fnmatch is changed
- apt_uninstall_fnmatch is changed
- name: Test update_cache 1 (check mode)
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_1_check_mode
check_mode: true
- name: Test update_cache 1
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_1
- name: Test update_cache 2 (check mode)
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_2_check_mode
check_mode: true
- name: Test update_cache 2
apt:
update_cache: true
cache_valid_time: 10
register: apt_update_cache_2
- name: verify update_cache
assert:
that:
- apt_update_cache_1_check_mode is changed
- apt_update_cache_1 is changed
- apt_update_cache_2_check_mode is not changed
- apt_update_cache_2 is not changed
- name: uninstall apt bindings with apt again
apt:
pkg: [python-apt, python3-apt]
state: absent
purge: yes
# UNINSTALL 'hello'
# With 'python-apt' uninstalled, the first call to 'apt' should install
# python-apt.
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
until: apt_result is success
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify uninstallation of hello
assert:
that:
- "'changed' in apt_result"
- apt_result is not changed
- "dpkg_result.rc == 1"
# UNINSTALL AGAIN
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
- name: verify no change on re-uninstall
assert:
that:
- "not apt_result.changed"
# INSTALL
- name: install hello with apt
apt: name=hello state=present
register: apt_result
- name: check hello with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: verify apt module outputs
assert:
that:
- "'changed' in apt_result"
- "'stderr' in apt_result"
- "'stdout' in apt_result"
- "'stdout_lines' in apt_result"
# INSTALL AGAIN
- name: install hello with apt
apt: name=hello state=present
register: apt_result
- name: verify no change on re-install
assert:
that:
- "not apt_result.changed"
# UNINSTALL AGAIN
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
register: apt_result
# INSTALL WITH VERSION WILDCARD
- name: install hello with apt
apt: name=hello=2.* state=present
register: apt_result
- name: check hello with wildcard with dpkg
shell: dpkg-query -l hello
failed_when: False
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: check hello version
shell: dpkg -s hello | grep Version | awk '{print $2}'
register: hello_version
- name: check hello architecture
shell: dpkg -s hello | grep Architecture | awk '{print $2}'
register: hello_architecture
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
# INSTALL WITHOUT REMOVALS
- name: Install hello, which conflicts with hello-traditional
apt:
pkg: hello
state: present
update_cache: no
- name: check hello
shell: dpkg-query -l hello
register: dpkg_result
- name: verify installation of hello
assert:
that:
- "apt_result.changed"
- "dpkg_result.rc == 0"
- name: Try installing hello-traditional, which conflicts with hello
apt:
pkg: hello-traditional
state: present
fail_on_autoremove: yes
ignore_errors: yes
register: apt_result
- name: verify failure of installing hello-traditional, because installing it would require removing hello
assert:
that:
- apt_result is failed
- '"Packages need to be removed but remove is disabled." in apt_result.msg'
- name: uninstall hello with apt
apt:
pkg: hello
state: absent
purge: yes
update_cache: no
- name: install deb file
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_initial
- name: install deb file again
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_secondary
- name: verify installation of hello
assert:
that:
- "apt_initial.changed"
- "not apt_secondary.changed"
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
- name: install deb file from URL
apt: deb="{{ distro_mirror }}/pool/main/h/hello/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb"
register: apt_url
- name: verify installation of hello
assert:
that:
- "apt_url.changed"
- name: uninstall hello with apt
apt: pkg=hello state=absent purge=yes
- name: force install of deb
apt: deb="/var/cache/apt/archives/hello_{{ hello_version.stdout }}_{{ hello_architecture.stdout }}.deb" force=true
register: dpkg_force
- name: verify installation of hello
assert:
that:
- "dpkg_force.changed"
# NEGATIVE: upgrade all packages while providing additional packages to install
- name: provide additional packages to install while upgrading all installed packages
apt: pkg=*,test state=latest
ignore_errors: True
register: apt_result
- name: verify failure of upgrade packages and install
assert:
that:
- "not apt_result.changed"
- "apt_result.failed"
- name: autoclean during install
apt: pkg=hello state=present autoclean=yes
- name: undo previous install
apt: pkg=hello state=absent
# https://github.com/ansible/ansible/issues/23155
- name: create a repo file
copy:
dest: /etc/apt/sources.list.d/non-existing.list
content: deb http://ppa.launchpad.net/non-existing trusty main
- name: test for sane error message
apt:
update_cache: yes
register: apt_result
ignore_errors: yes
- name: verify sane error message
assert:
that:
- "'Failed to fetch' in apt_result['msg']"
- "'403' in apt_result['msg']"
- name: Clean up
file:
name: /etc/apt/sources.list.d/non-existing.list
state: absent
# https://github.com/ansible/ansible/issues/28907
- name: Install parent package
apt:
name: libcaca-dev
- name: Install child package
apt:
name: libslang2-dev
- shell: apt-mark showmanual | grep libcaca-dev
ignore_errors: yes
register: parent_output
- name: Check that parent package is marked as installed manually
assert:
that:
- "'libcaca-dev' in parent_output.stdout"
- shell: apt-mark showmanual | grep libslang2-dev
ignore_errors: yes
register: child_output
- name: Check that child package is marked as installed manually
assert:
that:
- "'libslang2-dev' in child_output.stdout"
- name: Clean up
apt:
name: "{{ pkgs }}"
state: absent
vars:
pkgs:
- libcaca-dev
- libslang2-dev
# https://github.com/ansible/ansible/issues/38995
- name: build-dep for a package
apt:
name: tree
state: build-dep
register: apt_result
- name: Check the result
assert:
that:
- apt_result is changed
- name: build-dep for a package (idempotency)
apt:
name: tree
state: build-dep
register: apt_result
- name: Check the result
assert:
that:
- apt_result is not changed
# check policy_rc_d parameter
- name: Install unscd but forbid service start
apt:
name: unscd
policy_rc_d: 101
- name: Stop unscd service
service:
name: unscd
state: stopped
register: service_unscd_stop
- name: unscd service shouldn't have been stopped by previous task
assert:
that: service_unscd_stop is not changed
- name: Uninstall unscd
apt:
name: unscd
policy_rc_d: 101
- name: Create incorrect /usr/sbin/policy-rc.d
copy:
dest: /usr/sbin/policy-rc.d
content: apt integration test
mode: 0755
- name: Install unscd but forbid service start
apt:
name: unscd
policy_rc_d: 101
- name: Stop unscd service
service:
name: unscd
state: stopped
register: service_unscd_stop
- name: unscd service shouldn't have been stopped by previous task
assert:
that: service_unscd_stop is not changed
- name: Create incorrect /usr/sbin/policy-rc.d
copy:
dest: /usr/sbin/policy-rc.d
content: apt integration test
mode: 0755
register: policy_rc_d
- name: Check if /usr/sbin/policy-rc.d was correctly backed-up during unscd install
assert:
that: policy_rc_d is not changed
- name: Delete /usr/sbin/policy-rc.d
file:
path: /usr/sbin/policy-rc.d
state: absent
# https://github.com/ansible/ansible/issues/65325
- name: Download and install old version of hello (to test allow_change_held_packages option)
apt: "deb=https://ci-files.testing.ansible.com/test/integration/targets/dpkg_selections/hello_{{ hello_old_version }}_amd64.deb"
notify:
- remove package hello
- name: Put hello on hold
shell: apt-mark hold hello
- name: Get hold list
shell: apt-mark showhold
register: allow_change_held_packages_hold
- name: Check that the package hello is on the hold list
assert:
that:
- "'hello' in allow_change_held_packages_hold.stdout"
- name: Try updating package to the latest version (allow_change_held_packages=no)
apt:
name: hello
state: latest
ignore_errors: True
register: allow_change_held_packages_failed_update
- name: Get the version of the package
shell: dpkg -s hello | grep Version | awk '{print $2}'
register: allow_change_held_packages_hello_version
- name: Verify that the package was not updated (apt returns with an error)
assert:
that:
- "allow_change_held_packages_failed_update is failed"
- "'--allow-change-held-packages' in allow_change_held_packages_failed_update.stderr"
- "allow_change_held_packages_hello_version.stdout == hello_old_version"
- name: Try updating package to the latest version (allow_change_held_packages=yes)
apt:
name: hello
state: latest
allow_change_held_packages: yes
register: allow_change_held_packages_successful_update
- name: Get the version of the package
shell: dpkg -s hello | grep Version | awk '{print $2}'
register: allow_change_held_packages_hello_version
- name: Verify that the package was updated
assert:
that:
- "allow_change_held_packages_successful_update is changed"
- "allow_change_held_packages_hello_version.stdout != hello_old_version"
- name: Try updating package to the latest version again
apt:
name: hello
state: latest
allow_change_held_packages: yes
register: allow_change_held_packages_no_update
- name: Get the version of the package
shell: dpkg -s hello | grep Version | awk '{print $2}'
register: allow_change_held_packages_hello_version_again
- name: Verify that the package was not updated
assert:
that:
- "allow_change_held_packages_no_update is not changed"
- "allow_change_held_packages_hello_version.stdout == allow_change_held_packages_hello_version_again.stdout"
# Virtual package
- name: Install a virtual package
apt:
package:
- emacs-nox
- yaml-mode # <- the virtual package
state: latest
register: install_virtual_package_result
- name: Check the virtual package install result
assert:
that:
- install_virtual_package_result is changed
- name: Clean up virtual-package install
apt:
package:
- emacs-nox
- elpa-yaml-mode
state: absent
purge: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 72,133 |
Writing inventory plugin was difficult, documentation seems to miss crucial details
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
I struggled while converting an API-based dynamic inventory script to an inventory plugin. It seems like the documentation assumes knowledge of how an inventory plugin is supposed to work that I couldn't grasp. The missing information was found in this blog by termlen0: https://termlen0.github.io/2019/11/16/observations/
Specifically, I think it was not made clear how to use the various built-in methods such as self.inventory.add_host and self.inventory.add_group.
It would also be useful to learn which such methods are even available. There are probably many more useful ones that I do not know exist.
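For example, after much trial and error I ended up with calls roughly like the following (my own sketch pieced together from the blog above, so it may well not be the idiomatic approach):
```python
def parse(self, inventory, loader, path, cache=True):
    super(InventoryModule, self).parse(inventory, loader, path, cache)
    config = self._read_config_data(path)
    # create a group, add a host to it, and attach a host variable
    self.inventory.add_group('webservers')
    self.inventory.add_host('host001', group='webservers')
    self.inventory.set_variable('host001', 'ansible_host', '192.0.2.10')
```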
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
`docs/docsite/rst/dev_guide/developing_inventory.rst`
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.12
config file = /home/oelliott/git/neteng_ansible/ansible/ansible.cfg
configured module search path = ['/home/oelliott/git/neteng_ansible/.venv/lib/python3.8/site-packages/napalm_ansible/modules']
ansible python module location = /home/oelliott/git/neteng_ansible/.venv/lib/python3.8/site-packages/ansible
executable location = /home/oelliott/git/neteng_ansible/.venv/bin/ansible
python version = 3.8.2 (default, Jul 16 2020, 14:00:26) [GCC 9.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_ACTION_PLUGIN_PATH(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = ['/home/oelliott/git/neteng_ansible/.venv/lib/python3.8/site-packages/napalm_ansible/plugins/action']
DEFAULT_CALLBACK_WHITELIST(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = ['profile_tasks']
DEFAULT_FORCE_HANDLERS(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = True
DEFAULT_FORKS(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = 50
DEFAULT_GATHERING(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = explicit
DEFAULT_HASH_BEHAVIOUR(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = merge
DEFAULT_HOST_LIST(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = ['/home/oelliott/git/neteng_ansible/ansible/hosts_auto.yml']
DEFAULT_LOAD_CALLBACK_PLUGINS(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = /home/oelliott/git/neteng_ansible/ansible/log/ansible.log
DEFAULT_MODULE_PATH(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = ['/home/oelliott/git/neteng_ansible/.venv/lib/python3.8/site-packages/napalm_ansible/modules']
DEFAULT_ROLES_PATH(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = ['/home/oelliott/git/neteng_ansible/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = yaml
DEFAULT_VAULT_PASSWORD_FILE(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = /home/oelliott/git/neteng_ansible/ansible/.vault
DISPLAY_SKIPPED_HOSTS(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = False
PERSISTENT_COMMAND_TIMEOUT(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = 240
RETRY_FILES_ENABLED(/home/oelliott/git/neteng_ansible/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Ubuntu 20.04.1
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Would make it easier for those not so familiar with Ansible or python generally to write their own inventory plugins.
<!--- HINT: You can paste gist.github.com links for larger files -->
|
https://github.com/ansible/ansible/issues/72133
|
https://github.com/ansible/ansible/pull/79288
|
938c0fa944cabdc1a21745abade7f05ac3e6ee26
|
a12a9b409f964a911c7348e85035475fd6eab0b4
| 2020-10-07T08:33:31Z |
python
| 2022-11-03T19:05:33Z |
docs/docsite/rst/dev_guide/developing_inventory.rst
|
.. _developing_inventory:
****************************
Developing dynamic inventory
****************************
Ansible can pull inventory information from dynamic sources, including cloud sources, by using the supplied :ref:`inventory plugins <inventory_plugins>`. For details about how to pull inventory information, see :ref:`dynamic_inventory`. If the source you want is not currently covered by existing plugins, you can create your own inventory plugin as with any other plugin type.
In previous versions, you had to create a script or program that could output JSON in the correct format when invoked with the proper arguments.
You can still use and write inventory scripts, as we ensured backwards compatibility through the :ref:`script inventory plugin <script_inventory>`
and there is no restriction on the programming language used.
If you choose to write a script, however, you will need to implement some features yourself such as caching, configuration management, dynamic variable and group composition, and so on.
If you use :ref:`inventory plugins <inventory_plugins>` instead, you can use the Ansible codebase and add these common features automatically.
.. contents:: Topics
:local:
.. _inventory_sources:
Inventory sources
=================
Inventory sources are the input strings that inventory plugins work with.
An inventory source can be a path to a file or to a script, or it can be raw data that the plugin can interpret.
The table below shows some examples of inventory plugins and the source types that you can pass to them with ``-i`` on the command line.
+--------------------------------------------+-----------------------------------------+
| Plugin | Source |
+--------------------------------------------+-----------------------------------------+
| :ref:`host list <host_list_inventory>` | A comma-separated list of hosts |
+--------------------------------------------+-----------------------------------------+
| :ref:`yaml <yaml_inventory>` | Path to a YAML format data file |
+--------------------------------------------+-----------------------------------------+
| :ref:`constructed <constructed_inventory>` | Path to a YAML configuration file |
+--------------------------------------------+-----------------------------------------+
| :ref:`ini <ini_inventory>` | Path to an INI formatted data file |
+--------------------------------------------+-----------------------------------------+
| :ref:`virtualbox <virtualbox_inventory>` | Path to a YAML configuration file |
+--------------------------------------------+-----------------------------------------+
| :ref:`script plugin <script_inventory>` | Path to an executable that outputs JSON |
+--------------------------------------------+-----------------------------------------+
.. _developing_inventory_inventory_plugins:
Inventory plugins
=================
Like most plugin types (except modules), inventory plugins must be developed in Python. They execute on the controller and should therefore adhere to the :ref:`control_node_requirements`.
Most of the documentation in :ref:`developing_plugins` also applies here. You should read that document first for a general understanding and then come back to this document for specifics on inventory plugins.
Normally, inventory plugins are executed at the start of a run, and before the playbooks, plays, or roles are loaded.
However, you can use the ``meta: refresh_inventory`` task to clear the current inventory and execute the inventory plugins again, and this task will generate a new inventory.
If you use the persistent cache, inventory plugins can also use the configured cache plugin to store and retrieve data. Caching inventory avoids making repeated and costly external calls.
.. _developing_an_inventory_plugin:
Developing an inventory plugin
------------------------------
The first thing you want to do is use the base class:
.. code-block:: python
from ansible.plugins.inventory import BaseInventoryPlugin
class InventoryModule(BaseInventoryPlugin):
NAME = 'myplugin'  # used internally by Ansible; it should match the file name, but this is not required
If the inventory plugin is in a collection, the NAME should be in the 'namespace.collection_name.myplugin' format. The base class has a couple of methods that each plugin should implement and a few helpers for parsing the inventory source and updating the inventory.
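For example, a plugin shipped in a hypothetical ``my_namespace.my_collection`` collection might use (the namespace and collection names here are placeholders):
.. code-block:: python
    from ansible.plugins.inventory import BaseInventoryPlugin
    class InventoryModule(BaseInventoryPlugin):
        NAME = 'my_namespace.my_collection.myplugin'  # fully qualified name for a collection plugin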
After you have the basic plugin working, you can incorporate other features by adding more base classes:
.. code-block:: python
from ansible.plugins.inventory import BaseInventoryPlugin, Constructable, Cacheable
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'myplugin'
For the bulk of the work in a plugin, we mostly want to deal with 2 methods ``verify_file`` and ``parse``.
.. _inventory_plugin_verify_file:
verify_file method
^^^^^^^^^^^^^^^^^^
Ansible uses this method to quickly determine if the inventory source is usable by the plugin. The determination does not need to be 100% accurate, as there might be an overlap in what plugins can handle and by default Ansible will try the enabled plugins as per their sequence.
.. code-block:: python
def verify_file(self, path):
''' return true/false if this is possibly a valid file for this plugin to consume '''
valid = False
if super(InventoryModule, self).verify_file(path):
# base class verifies that file exists and is readable by current user
if path.endswith(('virtualbox.yaml', 'virtualbox.yml', 'vbox.yaml', 'vbox.yml')):
valid = True
return valid
In the above example, from the :ref:`virtualbox inventory plugin <virtualbox_inventory>`, we screen for specific file name patterns to avoid attempting to consume any valid YAML file. You can add any type of condition here, but the most common one is 'extension matching'. If you implement extension matching for YAML configuration files, the path suffix <plugin_name>.<yml|yaml> should be accepted. All valid extensions should be documented in the plugin description.
The following is another example that does not use a 'file' but the inventory source string itself,
from the :ref:`host list <host_list_inventory>` plugin:
.. code-block:: python
def verify_file(self, path):
''' don't call base class as we don't expect a path, but a host list '''
host_list = path
valid = False
b_path = to_bytes(host_list, errors='surrogate_or_strict')
if not os.path.exists(b_path) and ',' in host_list:
# the path does NOT exist and there is a comma to indicate this is a 'host list'
valid = True
return valid
This method is just to expedite the inventory process and avoid unnecessary parsing of sources that are easy to filter out before causing a parse error.
.. _inventory_plugin_parse:
parse method
^^^^^^^^^^^^
This method does the bulk of the work in the plugin.
It takes the following parameters:
* inventory: inventory object with existing data and the methods to add hosts/groups/variables to inventory
* loader: Ansible's DataLoader. The DataLoader can read files, auto load JSON/YAML and decrypt vaulted data, and cache read files.
* path: string with inventory source (this is usually a path, but is not required)
* cache: indicates whether the plugin should use or avoid caches (cache plugin and/or loader)
The base class does some minimal assignment for reuse in other methods.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
self.loader = loader
self.inventory = inventory
self.templar = Templar(loader=loader)
It is up to the plugin now to parse the provided inventory source and translate it into Ansible inventory.
To facilitate this, the example below uses a few helper functions:
.. code-block:: python
NAME = 'myplugin'
def parse(self, inventory, loader, path, cache=True):
# call base method to ensure properties are available for use with other helper methods
super(InventoryModule, self).parse(inventory, loader, path, cache)
# this method will parse 'common format' inventory sources and
# update any options declared in DOCUMENTATION as needed
config = self._read_config_data(path)
# if NOT using _read_config_data you should call set_options directly,
# to process any defined configuration for this plugin,
# if you don't define any options you can skip
#self.set_options()
# example consuming options from inventory source
mysession = apilib.session(user=self.get_option('api_user'),
password=self.get_option('api_pass'),
server=self.get_option('api_server')
)
# make requests to get data to feed into inventory
mydata = mysession.getitall()
#parse data and create inventory objects:
for colo in mydata:
for server in mydata[colo]['servers']:
self.inventory.add_host(server['name'])
self.inventory.set_variable(server['name'], 'ansible_host', server['external_ip'])
The specifics will vary depending on API and structure returned. Remember that if you get an inventory source error or any other issue, you should ``raise AnsibleParserError`` to let Ansible know that the source was invalid or the process failed.
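For instance, a failure to interpret the source can be reported like this (a minimal sketch; ``AnsibleParserError`` is imported from ``ansible.errors``):
.. code-block:: python
    from ansible.errors import AnsibleParserError
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path, cache)
        try:
            config = self._read_config_data(path)
        except Exception as e:
            raise AnsibleParserError('Unable to parse %s: %s' % (path, e))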
For examples on how to implement an inventory plugin, see the source code here:
`lib/ansible/plugins/inventory <https://github.com/ansible/ansible/tree/devel/lib/ansible/plugins/inventory>`_.
.. _inventory_plugin_caching:
inventory cache
^^^^^^^^^^^^^^^
To cache the inventory, extend the inventory plugin documentation with the inventory_cache documentation fragment and use the Cacheable base class.
.. code-block:: yaml
extends_documentation_fragment:
- inventory_cache
.. code-block:: python
class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
NAME = 'myplugin'
Next, load the cache plugin specified by the user to read from and update the cache. If your inventory plugin uses YAML-based configuration files and the ``_read_config_data`` method, the cache plugin is loaded within that method. If your inventory plugin does not use ``_read_config_data``, you must load the cache explicitly with ``load_cache_plugin``.
.. code-block:: python
NAME = 'myplugin'
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
Before using the cache plugin, you must retrieve a unique cache key by using the ``get_cache_key`` method. This task needs to be done by all inventory modules using the cache, so that you don't use/overwrite other parts of the cache.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
cache_key = self.get_cache_key(path)
Now that you've enabled caching, loaded the correct plugin, and retrieved a unique cache key, you can set up the flow of data between the cache and your inventory using the ``cache`` parameter of the ``parse`` method. This value comes from the inventory manager and indicates whether the inventory is being refreshed (such as by the ``--flush-cache`` or the meta task ``refresh_inventory``). Although the cache shouldn't be used to populate the inventory when being refreshed, the cache should be updated with the new inventory if the user has enabled caching. You can use ``self._cache`` like a dictionary. The following pattern allows refreshing the inventory to work in conjunction with caching.
.. code-block:: python
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self.load_cache_plugin()
cache_key = self.get_cache_key(path)
# cache may be True or False at this point to indicate if the inventory is being refreshed
# get the user's cache option too to see if we should save the cache if it is changing
user_cache_setting = self.get_option('cache')
# read if the user has caching enabled and the cache isn't being refreshed
attempt_to_read_cache = user_cache_setting and cache
# update if the user has caching enabled and the cache is being refreshed; update this value to True if the cache has expired below
cache_needs_update = user_cache_setting and not cache
# attempt to read the cache if inventory isn't being refreshed and the user has caching enabled
if attempt_to_read_cache:
try:
results = self._cache[cache_key]
except KeyError:
# This occurs if the cache_key is not in the cache or if the cache_key expired, so the cache needs to be updated
cache_needs_update = True
if not attempt_to_read_cache or cache_needs_update:
# parse the provided inventory source
results = self.get_inventory()
if cache_needs_update:
self._cache[cache_key] = results
# submit the parsed data to the inventory object (add_host, set_variable, etc)
self.populate(results)
After the ``parse`` method is complete, the contents of ``self._cache`` are used to set the cache plugin if the contents of the cache have changed.
You have three other cache methods available:
- ``set_cache_plugin`` forces the cache plugin to be set with the contents of ``self._cache``, before the ``parse`` method completes
- ``update_cache_if_changed`` sets the cache plugin only if ``self._cache`` has been modified, before the ``parse`` method completes
- ``clear_cache`` flushes the cache, ultimately by calling the cache plugin's ``flush()`` method, whose implementation is dependent upon the particular cache plugin in use. Note that if the user is using the same cache backend for facts and inventory, both will get flushed. To avoid this, the user can specify a distinct cache backend in their inventory plugin configuration.
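As a rough illustration, a plugin that always refreshes its data but persists the cache only when it actually changed might do the following (a sketch; ``get_inventory`` is a hypothetical helper that queries the external source):
.. code-block:: python
    def parse(self, inventory, loader, path, cache=True):
        super(InventoryModule, self).parse(inventory, loader, path)
        self.load_cache_plugin()
        cache_key = self.get_cache_key(path)
        results = self.get_inventory()  # hypothetical helper
        self._cache[cache_key] = results
        # write the cache backend now, but only if self._cache was modified
        self.update_cache_if_changed()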
constructed features
^^^^^^^^^^^^^^^^^^^^
Inventory plugins can create host variables and groups from Jinja2 expressions and variables by using features from the ``constructed`` inventory plugin. To do this, use the ``Constructable`` base class and extend the inventory plugin's documentation with the ``constructed`` documentation fragment.
.. code-block:: yaml
extends_documentation_fragment:
- constructed
.. code-block:: python
class InventoryModule(BaseInventoryPlugin, Constructable):
NAME = 'ns.coll.myplugin'
The three main options from the ``constructed`` documentation fragment are ``compose``, ``keyed_groups``, and ``groups``. See the ``constructed`` inventory plugin for examples on using these. ``compose`` is a dictionary of variable names and Jinja2 expressions. Once a host is added to inventory and any initial variables have been set, call the method ``_set_composite_vars`` to add composed host variables. If this is done before adding ``keyed_groups`` and ``groups``, the group generation will be able to use the composed variables.
.. code-block:: python
def add_host(self, hostname, host_vars):
self.inventory.add_host(hostname, group='all')
for var_name, var_value in host_vars.items():
self.inventory.set_variable(hostname, var_name, var_value)
# Determines if composed variables or groups using nonexistent variables is an error
strict = self.get_option('strict')
# Add variables created by the user's Jinja2 expressions to the host
self._set_composite_vars(self.get_option('compose'), host_vars, hostname, strict=True)
# The following two methods combine the provided variables dictionary with the latest host variables
# Using these methods after _set_composite_vars() allows groups to be created with the composed variables
self._add_host_to_composed_groups(self.get_option('groups'), host_vars, hostname, strict=strict)
self._add_host_to_keyed_groups(self.get_option('keyed_groups'), host_vars, hostname, strict=strict)
By default, group names created with ``_add_host_to_composed_groups()`` and ``_add_host_to_keyed_groups()`` are valid Python identifiers. Invalid characters are replaced with an underscore ``_``. A plugin can change the sanitization used for the constructed features by setting ``self._sanitize_group_name`` to a new function. The core engine also does sanitization, so if the custom function is less strict it should be used in conjunction with the configuration setting ``TRANSFORM_INVALID_GROUP_CHARS``.
.. code-block:: python
from ansible.inventory.group import to_safe_group_name
class InventoryModule(BaseInventoryPlugin, Constructable):
NAME = 'ns.coll.myplugin'
@staticmethod
def custom_sanitizer(name):
return to_safe_group_name(name, replacer='')
def parse(self, inventory, loader, path, cache=True):
super(InventoryModule, self).parse(inventory, loader, path)
self._sanitize_group_name = custom_sanitizer
.. _inventory_source_common_format:
Common format for inventory sources
-----------------------------------
To simplify development, most plugins use a standard YAML-based configuration file as the inventory source. The file has only one required field ``plugin``, which should contain the name of the plugin that is expected to consume the file.
Depending on other common features used, you might need other fields, and you can add custom options in each plugin as required.
For example, if you use the integrated caching, ``cache_plugin``, ``cache_timeout`` and other cache-related fields could be present.
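For example, a source file for a hypothetical plugin that uses the integrated caching might look like this:
.. code-block:: yaml
    # myplugin.yml
    plugin: my_namespace.my_collection.myplugin
    cache: true
    cache_plugin: jsonfile
    cache_timeout: 7200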
.. _inventory_development_auto:
The 'auto' plugin
-----------------
From Ansible 2.5 onwards, we include the :ref:`auto inventory plugin <auto_inventory>` and enable it by default. If the ``plugin`` field in your standard configuration file matches the name of your inventory plugin, the ``auto`` inventory plugin will load your plugin. The 'auto' plugin makes it easier to use your plugin without having to update configurations.
.. _inventory_scripts:
.. _developing_inventory_scripts:
Inventory scripts
=================
Even though we now have inventory plugins, we still support inventory scripts, not only for backwards compatibility but also to allow users to use other programming languages.
.. _inventory_script_conventions:
Inventory script conventions
----------------------------
Inventory scripts must accept the ``--list`` and ``--host <hostname>`` arguments. Although other arguments are allowed, Ansible will not use them.
Such arguments might still be useful for executing the scripts directly.
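For example, a hypothetical executable script named ``my_inventory.py`` would be invoked as:
.. code-block:: shell
    ./my_inventory.py --list
    ./my_inventory.py --host host001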
When the script is called with the single argument ``--list``, the script must output to stdout a JSON object that contains all the groups to be managed. Each group's value should be either an object containing a list of each host, any child groups, and potential group variables, or simply a list of hosts:
.. code-block:: json
{
"group001": {
"hosts": ["host001", "host002"],
"vars": {
"var1": true
},
"children": ["group002"]
},
"group002": {
"hosts": ["host003","host004"],
"vars": {
"var2": 500
},
"children":[]
}
}
If any of the elements of a group are empty, they may be omitted from the output.
When called with the argument ``--host <hostname>`` (where <hostname> is a host from above), the script must print a JSON object, either empty or containing variables to make them available to templates and playbooks. For example:
.. code-block:: json
{
"VAR001": "VALUE",
"VAR002": "VALUE"
}
Printing variables is optional. If the script does not print variables, it should print an empty JSON object.
.. _inventory_script_tuning:
Tuning the external inventory script
------------------------------------
.. versionadded:: 1.3
The stock inventory script system mentioned above works for all versions of Ansible, but calling ``--host`` for every host can be rather inefficient, especially if it involves API calls to a remote subsystem.
To avoid this inefficiency, if the inventory script returns a top-level element called "_meta", it is possible to return all the host variables in a single script execution. When this meta element contains a value for "hostvars", the inventory script will not be invoked with ``--host`` for each host. This behavior results in a significant performance increase for large numbers of hosts.
The data to be added to the top-level JSON object looks like this:
.. code-block:: text
{
# results of inventory script as above go here
# ...
"_meta": {
"hostvars": {
"host001": {
"var001" : "value"
},
"host002": {
"var002": "value"
}
}
}
}
To satisfy the requirements of using ``_meta``, and to prevent Ansible from calling your inventory with ``--host`` for each host, you must at least populate ``_meta`` with an empty ``hostvars`` object.
For example:
.. code-block:: text
{
# results of inventory script as above go here
# ...
"_meta": {
"hostvars": {}
}
}
.. _replacing_inventory_ini_with_dynamic_provider:
If you intend to replace an existing static inventory file with an inventory script, it must return a JSON object which contains an 'all' group that includes every host in the inventory as a member and every group in the inventory as a child. It should also include an 'ungrouped' group which contains all hosts which are not members of any other group.
A skeleton example of this JSON object is:
.. code-block:: json
{
"_meta": {
"hostvars": {}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"children": [
]
}
}
An easy way to see how this should look is using :ref:`ansible-inventory`, which also supports ``--list`` and ``--host`` parameters like an inventory script would.
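For example, assuming an inventory source called ``inventory.yml``:
.. code-block:: shell
    ansible-inventory -i inventory.yml --list
    ansible-inventory -i inventory.yml --host host001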
.. seealso::
:ref:`developing_api`
Python API to Playbooks and Ad Hoc Task Execution
:ref:`developing_modules_general`
Get started with developing a module
:ref:`developing_plugins`
How to develop plugins
`AWX <https://github.com/ansible/awx>`_
REST API endpoint and GUI for Ansible, syncs with dynamic inventory
`Development Mailing List <https://groups.google.com/group/ansible-devel>`_
Mailing list for development topics
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 76,823 |
[blockinfile]: multiple blocks get generated, if you add "\n" to marker
|
### Summary
we configure the domain info in the `/etc/hosts` file; we monitor our service and change the domain with a bash script like this:
```
ansible 127.0.0.1 -m blockinfile -a "block='192.168.1.1 test.company.com' marker='# {mark} : test block' path=/etc/hosts"
```
BUT, occasionally the file `/etc/hosts` contains multiple blocks. I expected that there should be only one block.
I have read the source code `lib/ansible/modules/files/blockinfile.py` and found nothing suspicious.
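Update: I now suspect the marker is involved. The module compares each existing line of the file against the rendered begin/end marker lines, so if the marker value contains a literal "\n" the markers written to the file can never match a single line on the next run, and a fresh block is appended every time. A variant like this (illustrative) reproduces the duplication for me:
```
ansible 127.0.0.1 -m blockinfile -a "block='192.168.1.1 test.company.com' marker='# {mark} : test block\n' path=/etc/hosts"
```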
### Issue Type
~Bug Report~
- Docs Report
### Component Name
blockinfile
### Ansible Version
```console
$ ansible --version
ansible 2.7.5.post0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 7 2019, 00:51:29) [GCC 4.8.5 20150623 (Red Hat 4.8.5-39)]
```
### Configuration
```console
$ ansible-config dump --only-changed
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
```
### OS / Environment
CentOS-7.6
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
ansible 127.0.0.1 -m blockinfile -a "block='192.168.1.1 test.company.com' marker='# {mark} : test block' path=/etc/hosts"
```
### Expected Results
```console
# BEGIN : test block
192.168.1.1 test.company.com
# END : test block
```
### Actual Results
```console
# BEGIN : test block
192.168.1.1 test.company.com
# END : test block
# BEGIN : test block
192.168.1.2 test.company.com
# END : test block
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/76823
|
https://github.com/ansible/ansible/pull/79292
|
a12a9b409f964a911c7348e85035475fd6eab0b4
|
f700047e69a03917194d1ec9f6d73577013362ce
| 2022-01-24T06:36:57Z |
python
| 2022-11-03T19:10:45Z |
lib/ansible/modules/blockinfile.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2014, 2015 YAEGASHI Takeshi <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: blockinfile
short_description: Insert/update/remove a text block surrounded by marker lines
version_added: '2.0'
description:
- This module will insert/update/remove a block of multi-line text surrounded by customizable marker lines.
author:
- Yaegashi Takeshi (@yaegashi)
options:
path:
description:
- The file to modify.
- Before Ansible 2.3 this option was only usable as I(dest), I(destfile) and I(name).
type: path
required: yes
aliases: [ dest, destfile, name ]
state:
description:
- Whether the block should be there or not.
type: str
choices: [ absent, present ]
default: present
marker:
description:
- The marker line template.
- C({mark}) will be replaced with the values in C(marker_begin) (default="BEGIN") and C(marker_end) (default="END").
- Using a custom marker without the C({mark}) variable may result in the block being repeatedly inserted on subsequent playbook runs.
type: str
default: '# {mark} ANSIBLE MANAGED BLOCK'
block:
description:
- The text to insert inside the marker lines.
- If it is missing or an empty string, the block will be removed as if C(state) were specified to C(absent).
type: str
default: ''
aliases: [ content ]
insertafter:
description:
- If specified and no begin/ending C(marker) lines are found, the block will be inserted after the last match of specified regular expression.
- A special value is available; C(EOF) for inserting the block at the end of the file.
- If specified regular expression has no matches, C(EOF) will be used instead.
- The presence of the multiline flag (?m) in the regular expression controls whether the match is done line by line or with multiple lines.
This behaviour was added in ansible-core 2.14.
type: str
choices: [ EOF, '*regex*' ]
default: EOF
insertbefore:
description:
- If specified and no begin/ending C(marker) lines are found, the block will be inserted before the last match of specified regular expression.
- A special value is available; C(BOF) for inserting the block at the beginning of the file.
- If specified regular expression has no matches, the block will be inserted at the end of the file.
- The presence of the multiline flag (?m) in the regular expression controls whether the match is done line by line or with multiple lines.
This behaviour was added in ansible-core 2.14.
type: str
choices: [ BOF, '*regex*' ]
create:
description:
- Create a new file if it does not exist.
type: bool
default: no
backup:
description:
- Create a backup file including the timestamp information so you can
get the original file back if you somehow clobbered it incorrectly.
type: bool
default: no
marker_begin:
description:
- This will be inserted at C({mark}) in the opening ansible block marker.
type: str
default: BEGIN
version_added: '2.5'
marker_end:
required: false
description:
- This will be inserted at C({mark}) in the closing ansible block marker.
type: str
default: END
version_added: '2.5'
notes:
- When using 'with_*' loops be aware that if you do not set a unique mark the block will be overwritten on each iteration.
- As of Ansible 2.3, the I(dest) option has been changed to I(path) as default, but I(dest) still works as well.
- Option I(follow) has been removed in Ansible 2.5, because this module modifies the contents of the file so I(follow=no) doesn't make sense.
- When more than one block should be handled in one file you must change the I(marker) per task.
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.files
- files
- validate
attributes:
check_mode:
support: full
diff_mode:
support: full
safe_file_operations:
support: full
platform:
support: full
platforms: posix
vault:
support: none
'''
EXAMPLES = r'''
# Before Ansible 2.3, option 'dest' or 'name' was used instead of 'path'
- name: Insert/Update "Match User" configuration block in /etc/ssh/sshd_config
ansible.builtin.blockinfile:
path: /etc/ssh/sshd_config
block: |
Match User ansible-agent
PasswordAuthentication no
- name: Insert/Update eth0 configuration stanza in /etc/network/interfaces
(it might be better to copy files into /etc/network/interfaces.d/)
ansible.builtin.blockinfile:
path: /etc/network/interfaces
block: |
iface eth0 inet static
address 192.0.2.23
netmask 255.255.255.0
- name: Insert/Update configuration using a local file and validate it
ansible.builtin.blockinfile:
block: "{{ lookup('ansible.builtin.file', './local/sshd_config') }}"
path: /etc/ssh/sshd_config
backup: yes
validate: /usr/sbin/sshd -T -f %s
- name: Insert/Update HTML surrounded by custom markers after <body> line
ansible.builtin.blockinfile:
path: /var/www/html/index.html
marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
insertafter: "<body>"
block: |
<h1>Welcome to {{ ansible_hostname }}</h1>
<p>Last updated on {{ ansible_date_time.iso8601 }}</p>
- name: Remove HTML as well as surrounding markers
ansible.builtin.blockinfile:
path: /var/www/html/index.html
marker: "<!-- {mark} ANSIBLE MANAGED BLOCK -->"
block: ""
- name: Add mappings to /etc/hosts
ansible.builtin.blockinfile:
path: /etc/hosts
block: |
{{ item.ip }} {{ item.name }}
marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.name }}"
loop:
- { name: host1, ip: 10.10.1.10 }
- { name: host2, ip: 10.10.1.11 }
- { name: host3, ip: 10.10.1.12 }
- name: Search with a multiline search flags regex and if found insert after
blockinfile:
path: listener.ora
block: "{{ listener_line | indent(width=8, first=True) }}"
insertafter: '(?m)SID_LIST_LISTENER_DG =\n.*\(SID_LIST ='
marker: " <!-- {mark} ANSIBLE MANAGED BLOCK -->"
'''
import re
import os
import tempfile
from ansible.module_utils.six import b
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils._text import to_bytes, to_native
def write_changes(module, contents, path):
tmpfd, tmpfile = tempfile.mkstemp(dir=module.tmpdir)
f = os.fdopen(tmpfd, 'wb')
f.write(contents)
f.close()
validate = module.params.get('validate', None)
valid = not validate
if validate:
if "%s" not in validate:
module.fail_json(msg="validate must contain %%s: %s" % (validate))
(rc, out, err) = module.run_command(validate % tmpfile)
valid = rc == 0
if rc != 0:
module.fail_json(msg='failed to validate: '
'rc:%s error:%s' % (rc, err))
if valid:
module.atomic_move(tmpfile, path, unsafe_writes=module.params['unsafe_writes'])
def check_file_attrs(module, changed, message, diff):
file_args = module.load_file_common_arguments(module.params)
if module.set_file_attributes_if_different(file_args, False, diff=diff):
if changed:
message += " and "
changed = True
message += "ownership, perms or SE linux context changed"
return message, changed
def main():
module = AnsibleModule(
argument_spec=dict(
path=dict(type='path', required=True, aliases=['dest', 'destfile', 'name']),
state=dict(type='str', default='present', choices=['absent', 'present']),
marker=dict(type='str', default='# {mark} ANSIBLE MANAGED BLOCK'),
block=dict(type='str', default='', aliases=['content']),
insertafter=dict(type='str'),
insertbefore=dict(type='str'),
create=dict(type='bool', default=False),
backup=dict(type='bool', default=False),
validate=dict(type='str'),
marker_begin=dict(type='str', default='BEGIN'),
marker_end=dict(type='str', default='END'),
),
mutually_exclusive=[['insertbefore', 'insertafter']],
add_file_common_args=True,
supports_check_mode=True
)
params = module.params
path = params['path']
if os.path.isdir(path):
module.fail_json(rc=256,
msg='Path %s is a directory !' % path)
path_exists = os.path.exists(path)
if not path_exists:
if not module.boolean(params['create']):
module.fail_json(rc=257,
msg='Path %s does not exist !' % path)
destpath = os.path.dirname(path)
if not os.path.exists(destpath) and not module.check_mode:
try:
os.makedirs(destpath)
except Exception as e:
module.fail_json(msg='Error creating %s Error code: %s Error description: %s' % (destpath, e[0], e[1]))
original = None
lines = []
else:
with open(path, 'rb') as f:
original = f.read()
lines = original.splitlines(True)
diff = {'before': '',
'after': '',
'before_header': '%s (content)' % path,
'after_header': '%s (content)' % path}
if module._diff and original:
diff['before'] = original
insertbefore = params['insertbefore']
insertafter = params['insertafter']
block = to_bytes(params['block'])
marker = to_bytes(params['marker'])
present = params['state'] == 'present'
if not present and not path_exists:
module.exit_json(changed=False, msg="File %s not present" % path)
if insertbefore is None and insertafter is None:
insertafter = 'EOF'
if insertafter not in (None, 'EOF'):
insertre = re.compile(to_bytes(insertafter, errors='surrogate_or_strict'))
elif insertbefore not in (None, 'BOF'):
insertre = re.compile(to_bytes(insertbefore, errors='surrogate_or_strict'))
else:
insertre = None
marker0 = re.sub(b(r'{mark}'), b(params['marker_begin']), marker) + b(os.linesep)
marker1 = re.sub(b(r'{mark}'), b(params['marker_end']), marker) + b(os.linesep)
if present and block:
if not block.endswith(b(os.linesep)):
block += b(os.linesep)
blocklines = [marker0] + block.splitlines(True) + [marker1]
else:
blocklines = []
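    # Scan the existing lines for the begin/end marker lines so any previous block can be replaced in place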
n0 = n1 = None
for i, line in enumerate(lines):
if line == marker0:
n0 = i
if line == marker1:
n1 = i
if None in (n0, n1):
n0 = None
if insertre is not None:
if insertre.flags & re.MULTILINE:
match = insertre.search(original)
if match:
if insertafter:
n0 = to_native(original).count('\n', 0, match.end())
elif insertbefore:
n0 = to_native(original).count('\n', 0, match.start())
else:
for i, line in enumerate(lines):
if insertre.search(line):
n0 = i
if n0 is None:
n0 = len(lines)
elif insertafter is not None:
n0 += 1
elif insertbefore is not None:
n0 = 0 # insertbefore=BOF
else:
n0 = len(lines) # insertafter=EOF
elif n0 < n1:
lines[n0:n1 + 1] = []
else:
lines[n1:n0 + 1] = []
n0 = n1
# Ensure there is a line separator before the block of lines to be inserted
if n0 > 0:
if not lines[n0 - 1].endswith(b(os.linesep)):
lines[n0 - 1] += b(os.linesep)
lines[n0:n0] = blocklines
if lines:
result = b''.join(lines)
else:
result = b''
if module._diff:
diff['after'] = result
if original == result:
msg = ''
changed = False
elif original is None:
msg = 'File created'
changed = True
elif not blocklines:
msg = 'Block removed'
changed = True
else:
msg = 'Block inserted'
changed = True
backup_file = None
if changed and not module.check_mode:
if module.boolean(params['backup']) and path_exists:
backup_file = module.backup_local(path)
# We should always follow symlinks so that we change the real file
real_path = os.path.realpath(params['path'])
write_changes(module, result, real_path)
if module.check_mode and not path_exists:
module.exit_json(changed=changed, msg=msg, diff=diff)
attr_diff = {}
msg, changed = check_file_attrs(module, changed, msg, attr_diff)
attr_diff['before_header'] = '%s (file attributes)' % path
attr_diff['after_header'] = '%s (file attributes)' % path
difflist = [diff, attr_diff]
if backup_file is None:
module.exit_json(changed=changed, msg=msg, diff=difflist)
else:
module.exit_json(changed=changed, msg=msg, diff=difflist, backup_file=backup_file)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,448 |
Invalid memory facts on OpenBSD
|
### Summary
When running `ansible -m setup host`, reported `ansible_memtotal_mb` fact changes between invocations.
That's because `setup` reads available non-kernel memory, which fluctuates:
https://github.com/ansible/ansible/blob/2797dc644aa8c809444ccac64cad63e0d9a3f9fe/lib/ansible/module_utils/facts/hardware/openbsd.py#L96-L97
To read the total physical memory, the `hw.physmem` key should be used. Refer to the https://man.openbsd.org/sysctl.2 man page, `HW_USERMEM` and `HW_PHYSMEM`.
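For example, on an OpenBSD machine the two keys differ and only `hw.physmem` stays constant between calls (illustrative values):
```console
$ sysctl hw.physmem hw.usermem
hw.physmem=17054715904
hw.usermem=17054703616
```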
### Issue Type
Bug Report
### Component Name
setup
### Ansible Version
```console
All.
https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/hardware/openbsd.py#L97
```
### Configuration
```console
N/A
```
### OS / Environment
OpenBSD 7.0
### Steps to Reproduce
On an OpenBSD host, run this a few times:
```
ansible -m setup openbsd_host | grep memtotal
```
The total memory value will fluctuate between calls.
### Expected Results
I expect to see total installed memory.
### Actual Results
```console
N/A
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77448
|
https://github.com/ansible/ansible/pull/79316
|
4759590467faa23776f527e049a1686505339d4f
|
eae42ec57e9ab1f80bca478ca87f784c0c65260b
| 2022-04-02T00:49:32Z |
python
| 2022-11-08T15:30:12Z |
lib/ansible/module_utils/facts/hardware/openbsd.py
|
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
import time
from ansible.module_utils._text import to_text
from ansible.module_utils.facts.hardware.base import Hardware, HardwareCollector
from ansible.module_utils.facts import timeout
from ansible.module_utils.facts.utils import get_file_content, get_mount_size
from ansible.module_utils.facts.sysctl import get_sysctl
class OpenBSDHardware(Hardware):
"""
OpenBSD-specific subclass of Hardware. Defines memory, CPU and device facts:
- memfree_mb
- memtotal_mb
- swapfree_mb
- swaptotal_mb
- processor (a list)
- processor_cores
- processor_count
- processor_speed
- uptime_seconds
In addition, it also defines number of DMI facts and device facts.
"""
platform = 'OpenBSD'
def populate(self, collected_facts=None):
hardware_facts = {}
self.sysctl = get_sysctl(self.module, ['hw'])
hardware_facts.update(self.get_processor_facts())
hardware_facts.update(self.get_memory_facts())
hardware_facts.update(self.get_device_facts())
hardware_facts.update(self.get_dmi_facts())
hardware_facts.update(self.get_uptime_facts())
# storage devices notoriously prone to hang/block so they are under a timeout
try:
hardware_facts.update(self.get_mount_facts())
except timeout.TimeoutError:
pass
return hardware_facts
@timeout.timeout()
def get_mount_facts(self):
mount_facts = {}
mount_facts['mounts'] = []
fstab = get_file_content('/etc/fstab')
if fstab:
for line in fstab.splitlines():
if line.startswith('#') or line.strip() == '':
continue
fields = re.sub(r'\s+', ' ', line).split()
if fields[1] == 'none' or fields[3] == 'xx':
continue
mount_statvfs_info = get_mount_size(fields[1])
mount_info = {'mount': fields[1],
'device': fields[0],
'fstype': fields[2],
'options': fields[3]}
mount_info.update(mount_statvfs_info)
mount_facts['mounts'].append(mount_info)
return mount_facts
def get_memory_facts(self):
memory_facts = {}
# Get free memory. vmstat output looks like:
# procs memory page disks traps cpu
# r b w avm fre flt re pi po fr sr wd0 fd0 int sys cs us sy id
# 0 0 0 47512 28160 51 0 0 0 0 0 1 0 116 89 17 0 1 99
rc, out, err = self.module.run_command("/usr/bin/vmstat")
if rc == 0:
memory_facts['memfree_mb'] = int(out.splitlines()[-1].split()[4]) // 1024
memory_facts['memtotal_mb'] = int(self.sysctl['hw.usermem']) // 1024 // 1024
# Get swapctl info. swapctl output looks like:
# total: 69268 1K-blocks allocated, 0 used, 69268 available
# And for older OpenBSD:
# total: 69268k bytes allocated = 0k used, 69268k available
rc, out, err = self.module.run_command("/sbin/swapctl -sk")
if rc == 0:
swaptrans = {ord(u'k'): None,
ord(u'm'): None,
ord(u'g'): None}
data = to_text(out, errors='surrogate_or_strict').split()
memory_facts['swapfree_mb'] = int(data[-2].translate(swaptrans)) // 1024
memory_facts['swaptotal_mb'] = int(data[1].translate(swaptrans)) // 1024
return memory_facts
def get_uptime_facts(self):
# On openbsd, we need to call it with -n to get this value as an int.
sysctl_cmd = self.module.get_bin_path('sysctl')
cmd = [sysctl_cmd, '-n', 'kern.boottime']
rc, out, err = self.module.run_command(cmd)
if rc != 0:
return {}
kern_boottime = out.strip()
if not kern_boottime.isdigit():
return {}
return {
'uptime_seconds': int(time.time() - int(kern_boottime)),
}
def get_processor_facts(self):
cpu_facts = {}
processor = []
for i in range(int(self.sysctl['hw.ncpuonline'])):
processor.append(self.sysctl['hw.model'])
cpu_facts['processor'] = processor
# The following is partly a lie because there is no reliable way to
# determine the number of physical CPUs in the system. We can only
# query the number of logical CPUs, which hides the number of cores.
# On amd64/i386 we could try to inspect the smt/core/package lines in
# dmesg, however even those have proven to be unreliable.
# So take a shortcut and report the logical number of processors in
# 'processor_count' and 'processor_cores' and leave it at that.
cpu_facts['processor_count'] = self.sysctl['hw.ncpuonline']
cpu_facts['processor_cores'] = self.sysctl['hw.ncpuonline']
return cpu_facts
def get_device_facts(self):
device_facts = {}
devices = []
devices.extend(self.sysctl['hw.disknames'].split(','))
device_facts['devices'] = devices
return device_facts
def get_dmi_facts(self):
dmi_facts = {}
# We don't use dmidecode(8) here because:
# - it would add dependency on an external package
# - dmidecode(8) can only be ran as root
# So instead we rely on sysctl(8) to provide us the information on a
# best-effort basis. As a bonus we also get facts on non-amd64/i386
# platforms this way.
sysctl_to_dmi = {
'hw.product': 'product_name',
'hw.version': 'product_version',
'hw.uuid': 'product_uuid',
'hw.serialno': 'product_serial',
'hw.vendor': 'system_vendor',
}
for mib in sysctl_to_dmi:
if mib in self.sysctl:
dmi_facts[sysctl_to_dmi[mib]] = self.sysctl[mib]
return dmi_facts
class OpenBSDHardwareCollector(HardwareCollector):
_fact_class = OpenBSDHardware
_platform = 'OpenBSD'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,849 |
ansible.builtin.group should support "force" option, corresponding to `groupdel --force`
|
### Summary
As part of a project to rename a user, I created a second user and group with the same UID as the first, something which Ansible supports by explicitly setting the `uid` to a pre-existing value.
Now the transition is complete and it's time to remove the old group. But Ansible can't do it, it fails with an error:
"groupdel: cannot remove the primary group of user"
Arguably, this could be considered a bug with "groupdel", because I wasn't deleting the *only* primary group of the user. If `groupdel` allowed the deletion, the user would still have a primary group: the new one with the same gid.
It seems the solution that `groupdel` recommends though is to use the `--force` option, which is documented as being an override designed for cases like this. Here's the exact documentation for `--force` from `man groupdel`:
> This option forces the removal of the group, even if there's some
user having the group as the primary one.
Ansible should support `force: true` for the `group` builtin, defaulting to false. When `true`, on Linux, call `groupdel` with the `-f` option.
Support for using the longer `--force` flag was only added more recently (https://github.com/shadow-maint/shadow/pull/290). Since `-f` has been supported for longer, it should be used instead.
### Issue Type
Feature Idea
### Component Name
group
### Additional Information
```yaml
force: yes
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77849
|
https://github.com/ansible/ansible/pull/78172
|
a3531ac422c727df0748812c73a38f9950eebda9
|
d72326b6af7dab2b2fdf0a13e6ae6946b734375e
| 2022-05-18T20:41:48Z |
python
| 2022-11-09T20:52:27Z |
changelogs/fragments/78172-allow-force-deletion-of-group.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,849 |
ansible.builtin.group should support "force" option, corresponding to `groupdel --force`
|
### Summary
As part of a project to rename a user, I created a second user and group with the same UID as the first, something which Ansible supports by explicitly setting the `uid` to a pre-existing value.
Now the transition is complete and it's time to remove the old group. But Ansible can't do it, it fails with an error:
"groupdel: cannot remove the primary group of user"
Arguably, this could be considered a bug with "groupdel", because I wasn't deleting the *only* primary group of the user. If `groupdel` allowed the deletion, the user would still have a primary group: the new one with the same gid.
It seems the solution that `groupdel` recommends though is to use the `--force` option, which is documented as being an override designed for cases like this. Here's the exact documentation for `--force` from `man groupdel`:
> This option forces the removal of the group, even if there's some
user having the group as the primary one.
Ansible should support `force: true` for the `group` builtin, defaulting to false. When `true`, on Linux, call `groupdel` with the `-f` option.
Support for using the longer `--force` flag was only added more recently (https://github.com/shadow-maint/shadow/pull/290). Since `-f` has been supported for longer, it should be used instead.
### Issue Type
Feature Idea
### Component Name
group
### Additional Information
```yaml
force: yes
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77849
|
https://github.com/ansible/ansible/pull/78172
|
a3531ac422c727df0748812c73a38f9950eebda9
|
d72326b6af7dab2b2fdf0a13e6ae6946b734375e
| 2022-05-18T20:41:48Z |
python
| 2022-11-09T20:52:27Z |
lib/ansible/modules/group.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Stephen Fromm <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: group
version_added: "0.0.2"
short_description: Add or remove groups
requirements:
- groupadd
- groupdel
- groupmod
description:
- Manage presence of groups on a host.
- For Windows targets, use the M(ansible.windows.win_group) module instead.
options:
name:
description:
- Name of the group to manage.
type: str
required: true
gid:
description:
- Optional I(GID) to set for the group.
type: int
state:
description:
- Whether the group should be present or not on the remote host.
type: str
choices: [ absent, present ]
default: present
system:
description:
- If I(yes), indicates that the group created is a system group.
type: bool
default: no
local:
description:
- Forces the use of "local" command alternatives on platforms that implement it.
- This is useful in environments that use centralized authentication when you want to manipulate the local groups.
(for example, it uses C(lgroupadd) instead of C(groupadd)).
- This requires that these commands exist on the targeted host, otherwise it will be a fatal error.
type: bool
default: no
version_added: "2.6"
non_unique:
description:
- This option allows to change the group ID to a non-unique value. Requires C(gid).
- Not supported on macOS or BusyBox distributions.
type: bool
default: no
version_added: "2.8"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: none
platform:
platforms: posix
seealso:
- module: ansible.builtin.user
- module: ansible.windows.win_group
author:
- Stephen Fromm (@sfromm)
'''
EXAMPLES = '''
- name: Ensure group "somegroup" exists
ansible.builtin.group:
name: somegroup
state: present
- name: Ensure group "docker" exists with correct gid
ansible.builtin.group:
name: docker
state: present
gid: 1750
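# A sketch of creating a system group, using the documented ``system`` option
- name: Ensure system group ansibull exists
  ansible.builtin.group:
    name: ansibull
    state: present
    system: true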
'''
RETURN = r'''
gid:
description: Group ID of the group.
returned: When C(state) is 'present'
type: int
sample: 1001
name:
description: Group name.
returned: always
type: str
sample: users
state:
description: Whether the group is present or not.
returned: always
type: str
sample: 'absent'
system:
description: Whether the group is a system group or not.
returned: When C(state) is 'present'
type: bool
sample: False
'''
import grp
import os
from ansible.module_utils._text import to_bytes
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.sys_info import get_platform_subclass
class Group(object):
"""
This is a generic Group manipulation class that is subclassed
based on platform.
A subclass may wish to override the following action methods:-
- group_del()
- group_add()
- group_mod()
All subclasses MUST define platform and distribution (which may be None).
"""
platform = 'Generic'
distribution = None # type: str | None
GROUPFILE = '/etc/group'
def __new__(cls, *args, **kwargs):
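        # Dispatch to the platform-specific subclass (for example, FreeBsdGroup on FreeBSD)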
new_cls = get_platform_subclass(Group)
return super(cls, new_cls).__new__(new_cls)
def __init__(self, module):
self.module = module
self.state = module.params['state']
self.name = module.params['name']
self.gid = module.params['gid']
self.system = module.params['system']
self.local = module.params['local']
self.non_unique = module.params['non_unique']
def execute_command(self, cmd):
return self.module.run_command(cmd)
def group_del(self):
if self.local:
command_name = 'lgroupdel'
else:
command_name = 'groupdel'
cmd = [self.module.get_bin_path(command_name, True), self.name]
return self.execute_command(cmd)
def _local_check_gid_exists(self):
if self.gid:
for gr in grp.getgrall():
if self.gid == gr.gr_gid and self.name != gr.gr_name:
self.module.fail_json(msg="GID '{0}' already exists with group '{1}'".format(self.gid, gr.gr_name))
def group_add(self, **kwargs):
if self.local:
command_name = 'lgroupadd'
self._local_check_gid_exists()
else:
command_name = 'groupadd'
cmd = [self.module.get_bin_path(command_name, True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
elif key == 'system' and kwargs[key] is True:
cmd.append('-r')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
if self.local:
command_name = 'lgroupmod'
self._local_check_gid_exists()
else:
command_name = 'groupmod'
cmd = [self.module.get_bin_path(command_name, True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
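        # cmd still only holds the executable, so no change was requested; rc=None signals 'unchanged'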
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
def group_exists(self):
# The grp module does not distinguish between local and directory accounts.
# Its output cannot be used to determine whether or not a group exists locally.
# It returns True if the group exists locally or in the directory, so instead
# look in the local GROUP file for an existing account.
if self.local:
if not os.path.exists(self.GROUPFILE):
self.module.fail_json(msg="'local: true' specified but unable to find local group file {0} to parse.".format(self.GROUPFILE))
exists = False
name_test = '{0}:'.format(self.name)
with open(self.GROUPFILE, 'rb') as f:
reversed_lines = f.readlines()[::-1]
for line in reversed_lines:
if line.startswith(to_bytes(name_test)):
exists = True
break
if not exists:
self.module.warn(
"'local: true' specified and group was not found in {file}. "
"The local group may already exist if the local group database exists somewhere other than {file}.".format(file=self.GROUPFILE))
return exists
else:
try:
if grp.getgrnam(self.name):
return True
except KeyError:
return False
def group_info(self):
if not self.group_exists():
return False
try:
info = list(grp.getgrnam(self.name))
except KeyError:
return False
return info
# ===========================================
class SunOS(Group):
"""
This is a SunOS Group manipulation class. Solaris doesn't have
the 'system' group concept.
This overrides the following methods from the generic class:-
- group_add()
"""
platform = 'SunOS'
distribution = None
GROUPFILE = '/etc/group'
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('-g')
cmd.append(str(kwargs[key]))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class AIX(Group):
"""
This is an AIX Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'AIX'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('rmgroup', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('mkgroup', True)]
for key in kwargs:
if key == 'gid' and kwargs[key] is not None:
cmd.append('id=' + str(kwargs[key]))
elif key == 'system' and kwargs[key] is True:
cmd.append('-a')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('chgroup', True)]
info = self.group_info()
for key in kwargs:
if key == 'gid':
if kwargs[key] is not None and info[2] != int(kwargs[key]):
cmd.append('id=' + str(kwargs[key]))
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class FreeBsdGroup(Group):
"""
This is a FreeBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'FreeBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('pw', True), 'groupdel', self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupadd', self.name]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('pw', True), 'groupmod', self.name]
info = self.group_info()
cmd_len = len(cmd)
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
# modify the group if cmd will do anything
if cmd_len != len(cmd):
if self.module.check_mode:
return (0, '', '')
return self.execute_command(cmd)
return (None, '', '')
class DragonFlyBsdGroup(FreeBsdGroup):
"""
This is a DragonFlyBSD Group manipulation class.
It inherits all behaviors from FreeBsdGroup class.
"""
platform = 'DragonFly'
# ===========================================
class DarwinGroup(Group):
"""
This is a macOS (Darwin) Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
Group manipulation is done using dseditgroup(1).
"""
platform = 'Darwin'
distribution = None
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'create']
if self.gid is not None:
cmd += ['-i', str(self.gid)]
elif 'system' in kwargs and kwargs['system'] is True:
gid = self.get_lowest_available_system_gid()
if gid is not False:
self.gid = str(gid)
cmd += ['-i', str(self.gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_del(self):
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'delete']
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
def group_mod(self, gid=None):
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd = [self.module.get_bin_path('dseditgroup', True)]
cmd += ['-o', 'edit']
if gid is not None:
cmd += ['-i', str(gid)]
cmd += ['-L', self.name]
(rc, out, err) = self.execute_command(cmd)
return (rc, out, err)
return (None, '', '')
def get_lowest_available_system_gid(self):
# check for lowest available system gid (< 500)
try:
cmd = [self.module.get_bin_path('dscl', True)]
cmd += ['/Local/Default', '-list', '/Groups', 'PrimaryGroupID']
(rc, out, err) = self.execute_command(cmd)
lines = out.splitlines()
highest = 0
for group_info in lines:
parts = group_info.split(' ')
if len(parts) > 1:
gid = int(parts[-1])
if gid > highest and gid < 500:
highest = gid
if highest == 0 or highest == 499:
return False
return (highest + 1)
except Exception:
return False
class OpenBsdGroup(Group):
"""
This is an OpenBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'OpenBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class NetBsdGroup(Group):
"""
This is a NetBSD Group manipulation class.
This overrides the following methods from the generic class:-
- group_del()
- group_add()
- group_mod()
"""
platform = 'NetBSD'
distribution = None
GROUPFILE = '/etc/group'
def group_del(self):
cmd = [self.module.get_bin_path('groupdel', True), self.name]
return self.execute_command(cmd)
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('groupadd', True)]
if self.gid is not None:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
cmd.append(self.name)
return self.execute_command(cmd)
def group_mod(self, **kwargs):
cmd = [self.module.get_bin_path('groupmod', True)]
info = self.group_info()
if self.gid is not None and int(self.gid) != info[2]:
cmd.append('-g')
cmd.append(str(self.gid))
if self.non_unique:
cmd.append('-o')
if len(cmd) == 1:
return (None, '', '')
if self.module.check_mode:
return (0, '', '')
cmd.append(self.name)
return self.execute_command(cmd)
# ===========================================
class BusyBoxGroup(Group):
"""
BusyBox group manipulation class for systems that have addgroup and delgroup.
It overrides the following methods:
- group_add()
- group_del()
- group_mod()
"""
def group_add(self, **kwargs):
cmd = [self.module.get_bin_path('addgroup', True)]
if self.gid is not None:
cmd.extend(['-g', str(self.gid)])
if self.system:
cmd.append('-S')
cmd.append(self.name)
return self.execute_command(cmd)
def group_del(self):
cmd = [self.module.get_bin_path('delgroup', True), self.name]
return self.execute_command(cmd)
def group_mod(self, **kwargs):
# Since there is no groupmod command, modify /etc/group directly
info = self.group_info()
if self.gid is not None and self.gid != info[2]:
with open('/etc/group', 'rb') as f:
b_groups = f.read()
b_name = to_bytes(self.name)
b_current_group_string = b'%s:x:%d:' % (b_name, info[2])
b_new_group_string = b'%s:x:%d:' % (b_name, self.gid)
if b':%d:' % self.gid in b_groups:
self.module.fail_json(msg="gid '{gid}' in use".format(gid=self.gid))
if self.module.check_mode:
return 0, '', ''
b_new_groups = b_groups.replace(b_current_group_string, b_new_group_string)
with open('/etc/group', 'wb') as f:
f.write(b_new_groups)
return 0, '', ''
return None, '', ''
class AlpineGroup(BusyBoxGroup):
platform = 'Linux'
distribution = 'Alpine'
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'present']),
name=dict(type='str', required=True),
gid=dict(type='int'),
system=dict(type='bool', default=False),
local=dict(type='bool', default=False),
non_unique=dict(type='bool', default=False),
),
supports_check_mode=True,
required_if=[
['non_unique', True, ['gid']],
],
)
group = Group(module)
module.debug('Group instantiated - platform %s' % group.platform)
if group.distribution:
module.debug('Group instantiated - distribution %s' % group.distribution)
rc = None
out = ''
err = ''
result = {}
result['name'] = group.name
result['state'] = group.state
if group.state == 'absent':
if group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_del()
if rc != 0:
module.fail_json(name=group.name, msg=err)
elif group.state == 'present':
if not group.group_exists():
if module.check_mode:
module.exit_json(changed=True)
(rc, out, err) = group.group_add(gid=group.gid, system=group.system)
else:
(rc, out, err) = group.group_mod(gid=group.gid)
if rc is not None and rc != 0:
module.fail_json(name=group.name, msg=err)
if rc is None:
result['changed'] = False
else:
result['changed'] = True
if out:
result['stdout'] = out
if err:
result['stderr'] = err
if group.group_exists():
info = group.group_info()
result['system'] = group.system
result['gid'] = info[2]
module.exit_json(**result)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,849 |
ansible.builtin.group should support "force" option, corresponding to `groupdel --force`
|
### Summary
As part of a project to rename a user, I created a second user and group with the same UID as the first, something Ansible supports by explicitly setting the `uid` to a pre-existing value.
Now the transition is complete and it's time to remove the old group. But Ansible can't do it; it fails with an error:
"groupdel: cannot remove the primary group of user"
Arguably, this could be considered a bug in "groupdel", because I wasn't deleting the *only* primary group of the user. If `groupdel` allowed the deletion, the user would still have a primary group -- the new one with the same gid.
It seems the solution that `groupdel` recommends, though, is to use the `--force` option, which is documented as an override designed for cases like this. Here's the exact documentation for `--force` from `man groupdel`:
> This option forces the removal of the group, even if there's some
user having the group as the primary one.
Ansible should support `force: true` for the `group` builtin, defaulting to false. When `true`, on Linux, call `groupdel` with the `-f` option.
Support for the longer `--force` flag was only added more recently (https://github.com/shadow-maint/shadow/pull/290). Since `-f` has been supported for longer, it should be used instead.
### Issue Type
Feature Idea
### Component Name
group
### Additional Information
```yaml
force: yes
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77849
|
https://github.com/ansible/ansible/pull/78172
|
a3531ac422c727df0748812c73a38f9950eebda9
|
d72326b6af7dab2b2fdf0a13e6ae6946b734375e
| 2022-05-18T20:41:48Z |
python
| 2022-11-09T20:52:27Z |
test/integration/targets/group/tasks/tests.yml
|
---
- name: ensure test groups are deleted before the test
group:
name: '{{ item }}'
state: absent
loop:
- ansibullgroup
- ansibullgroup2
- ansibullgroup3
- block:
##
## group add
##
- name: create group (check mode)
group:
name: ansibullgroup
state: present
register: create_group_check
check_mode: true
- name: get result of create group (check mode)
script: 'grouplist.sh "{{ ansible_distribution }}"'
register: create_group_actual_check
- name: assert create group (check mode)
assert:
that:
- create_group_check is changed
- '"ansibullgroup" not in create_group_actual_check.stdout_lines'
- name: create group
group:
name: ansibullgroup
state: present
register: create_group
- name: get result of create group
script: 'grouplist.sh "{{ ansible_distribution }}"'
register: create_group_actual
- name: assert create group
assert:
that:
- create_group is changed
- create_group.gid is defined
- '"ansibullgroup" in create_group_actual.stdout_lines'
- name: create group (idempotent)
group:
name: ansibullgroup
state: present
register: create_group_again
- name: assert create group (idempotent)
assert:
that:
- not create_group_again is changed
##
## group check
##
- name: run existing group check tests
group:
name: "{{ create_group_actual.stdout_lines|random }}"
state: present
with_sequence: start=1 end=5
register: group_test1
- name: validate results for testcase 1
assert:
that:
- group_test1.results is defined
- group_test1.results|length == 5
- name: validate change results for testcase 1
assert:
that:
- not group_test1 is changed
##
## group add with gid
##
- name: get the next available gid
script: get_free_gid.py
args:
executable: '{{ ansible_python_interpreter }}'
register: gid
- name: create a group with a gid (check mode)
group:
name: ansibullgroup2
gid: '{{ gid.stdout_lines[0] }}'
state: present
register: create_group_gid_check
check_mode: true
- name: get result of create a group with a gid (check mode)
script: 'grouplist.sh "{{ ansible_distribution }}"'
register: create_group_gid_actual_check
- name: assert create group with a gid (check mode)
assert:
that:
- create_group_gid_check is changed
- '"ansibullgroup2" not in create_group_gid_actual_check.stdout_lines'
- name: create a group with a gid
group:
name: ansibullgroup2
gid: '{{ gid.stdout_lines[0] }}'
state: present
register: create_group_gid
- name: get gid of created group
script: "get_gid_for_group.py ansibullgroup2"
args:
executable: '{{ ansible_python_interpreter }}'
register: create_group_gid_actual
- name: assert create group with a gid
assert:
that:
- create_group_gid is changed
- create_group_gid.gid | int == gid.stdout_lines[0] | int
- create_group_gid_actual.stdout | trim | int == gid.stdout_lines[0] | int
- name: create a group with a gid (idempotent)
group:
name: ansibullgroup2
gid: '{{ gid.stdout_lines[0] }}'
state: present
register: create_group_gid_again
- name: assert create group with a gid (idempotent)
assert:
that:
- not create_group_gid_again is changed
- create_group_gid_again.gid | int == gid.stdout_lines[0] | int
- block:
- name: create a group with a non-unique gid
group:
name: ansibullgroup3
gid: '{{ gid.stdout_lines[0] }}'
non_unique: true
state: present
register: create_group_gid_non_unique
- name: validate gid required with non_unique
group:
name: foo
non_unique: true
register: missing_gid
ignore_errors: true
- name: assert create group with a non unique gid
assert:
that:
- create_group_gid_non_unique is changed
- create_group_gid_non_unique.gid | int == gid.stdout_lines[0] | int
- missing_gid is failed
when: ansible_facts.distribution not in ['MacOSX', 'Alpine']
##
## group remove
##
- name: delete group (check mode)
group:
name: ansibullgroup
state: absent
register: delete_group_check
check_mode: true
- name: get result of delete group (check mode)
script: 'grouplist.sh "{{ ansible_distribution }}"'
register: delete_group_actual_check
- name: assert delete group (check mode)
assert:
that:
- delete_group_check is changed
- '"ansibullgroup" in delete_group_actual_check.stdout_lines'
- name: delete group
group:
name: ansibullgroup
state: absent
register: delete_group
- name: get result of delete group
script: 'grouplist.sh "{{ ansible_distribution }}"'
register: delete_group_actual
- name: assert delete group
assert:
that:
- delete_group is changed
- '"ansibullgroup" not in delete_group_actual.stdout_lines'
- name: delete group (idempotent)
group:
name: ansibullgroup
state: absent
register: delete_group_again
- name: assert delete group (idempotent)
assert:
that:
- not delete_group_again is changed
- name: Ensure lgroupadd is present
action: "{{ ansible_facts.pkg_mgr }}"
args:
name: libuser
state: present
when: ansible_facts.system in ['Linux'] and ansible_distribution != 'Alpine' and ansible_os_family != 'Suse'
tags:
- user_test_local_mode
- name: Ensure lgroupadd is present - Alpine
command: apk add -U libuser
when: ansible_distribution == 'Alpine'
tags:
- user_test_local_mode
# https://github.com/ansible/ansible/issues/56481
- block:
- name: Test duplicate GID with local=yes
group:
name: "{{ item }}"
gid: 1337
local: true
loop:
- group1_local_test
- group2_local_test
ignore_errors: true
register: local_duplicate_gid_result
- assert:
that:
- local_duplicate_gid_result['results'][0] is success
- local_duplicate_gid_result['results'][1]['msg'] == "GID '1337' already exists with group 'group1_local_test'"
always:
- name: Cleanup
group:
name: group1_local_test
state: absent
# only applicable to Linux, limit further to CentOS where 'luseradd' is installed
when: ansible_distribution == 'CentOS'
# https://github.com/ansible/ansible/pull/59769
- block:
- name: create a local group with a gid
group:
name: group1_local_test
gid: 1337
local: true
state: present
register: create_local_group_gid
- name: get gid of created local group
script: "get_gid_for_group.py group1_local_test"
args:
executable: '{{ ansible_python_interpreter }}'
register: create_local_group_gid_actual
- name: assert create local group with a gid
assert:
that:
- create_local_group_gid is changed
- create_local_group_gid.gid | int == 1337 | int
- create_local_group_gid_actual.stdout | trim | int == 1337 | int
- name: create a local group with a gid (idempotent)
group:
name: group1_local_test
gid: 1337
state: present
register: create_local_group_gid_again
- name: assert create local group with a gid (idempotent)
assert:
that:
- not create_local_group_gid_again is changed
- create_local_group_gid_again.gid | int == 1337 | int
always:
- name: Cleanup create local group with a gid
group:
name: group1_local_test
state: absent
# only applicable to Linux, limit further to CentOS where 'luseradd' is installed
when: ansible_distribution == 'CentOS'
# https://github.com/ansible/ansible/pull/59772
- block:
- name: create group with a gid
group:
name: group1_test
gid: 1337
local: false
state: present
register: create_group_gid
- name: get gid of created group
script: "get_gid_for_group.py group1_test"
args:
executable: '{{ ansible_python_interpreter }}'
register: create_group_gid_actual
- name: assert create group with a gid
assert:
that:
- create_group_gid is changed
- create_group_gid.gid | int == 1337 | int
- create_group_gid_actual.stdout | trim | int == 1337 | int
- name: create local group with the same gid
group:
name: group1_test
gid: 1337
local: true
state: present
register: create_local_group_gid
- name: assert create local group with a gid
assert:
that:
- create_local_group_gid.gid | int == 1337 | int
always:
- name: Cleanup create group with a gid
group:
name: group1_test
local: false
state: absent
- name: Cleanup create local group with the same gid
group:
name: group1_test
local: true
state: absent
# only applicable to Linux, limit further to CentOS where 'lgroupadd' is installed
when: ansible_distribution == 'CentOS'
# create system group
- name: remove group
group:
name: ansibullgroup
state: absent
- name: create system group
group:
name: ansibullgroup
state: present
system: true
always:
- name: remove test groups after test
group:
name: '{{ item }}'
state: absent
loop:
- ansibullgroup
- ansibullgroup2
- ansibullgroup3
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 75,794 |
Example doesn't work: use a loop to create exponential backoff for retries/until
|
### Summary
In the **Data manipulation** page of the User Guide, the section [Loops and list comprehensions](https://docs.ansible.com/ansible/latest/user_guide/complex_data_manipulation.html#loops-and-list-comprehensions) contains this example code:
```yaml
- name: retry ping 10 times with exponential backup delay
ping:
retries: 10
delay: '{{item|int}}'
loop: '{{ range(1, 10)|map('pow', 2) }}'
```
This code will not work:
- Ping ["does not make sense in playbooks"](https://docs.ansible.com/ansible/2.3/ping_module.html#synopsis)
- The loop expression contains unescaped quotes
- `retries` is used with `until` to repeat a task until a condition is met. Combining it with a loop creates a loop within a loop.
In any case, in this example it might be cleaner not to use `map` at all, but to use a simple integer sequence as the loop variable and apply the exponential function to the delay:
```yaml
delay: "{{ item | pow(2) | int }}"
loop: "{{ range(1, 10) }}"
```
These changes can be made, e.g. using `uri` instead of `ping`, but it takes extra code to achieve the desired result. E.g. consider this playbook:
```yaml
---
- name: Test URI delay
hosts: localhost
tasks:
- name: URI call
vars:
uri_task:
status: -1
uri:
url: https://{{ site | default("google") }}.com
timeout: 1
failed_when: false
retries: 1
delay: "{{ item | pow(2) | int }}"
until: uri_task.status != -1
loop: "{{ range(1, 4) }}"
register: uri_task
when: uri_task.status != 200
- name: Count URI calls
set_fact:
uri_count: "{{ uri_task.results | selectattr('skipped', 'undefined') | length }}"
- name: Report result
debug:
msg: "{{ 'HTTP response ' ~ uri_task.results[uri_count | int - 1].status ~ ' on try ' ~ uri_count }}"
```
Running it with an invalid site name gives exponentially delayed retries, as desired:
`ansible-playbook test-uri.yml -e "site=googlez"`
```
TASK [URI call] *********************************************************************************************************************************************************
FAILED - RETRYING: URI call (1 retries left).
ok: [localhost] => (item=1)
FAILED - RETRYING: URI call (1 retries left).
ok: [localhost] => (item=2)
FAILED - RETRYING: URI call (1 retries left).
ok: [localhost] => (item=3)
TASK [Count URI calls] **************************************************************************************************************************************************
ok: [localhost]
TASK [Report result] ****************************************************************************************************************************************************
ok: [localhost] => {
"msg": "HTTP response -1 on try 3"
}
```
Running it with the default valid site name ("google") skips the retries and skips remaining loop iterations:
`ansible-playbook test-uri.yml`
```
TASK [URI call] *********************************************************************************************************************************************************
ok: [localhost] => (item=1)
skipping: [localhost] => (item=2)
skipping: [localhost] => (item=3)
TASK [Count URI calls] **************************************************************************************************************************************************
ok: [localhost]
TASK [Report result] ****************************************************************************************************************************************************
ok: [localhost] => {
"msg": "HTTP response 200 on try 1"
}
```
For the purpose of this documentation page, it might be better to choose a different example, e.g. converting each element in a list to Title Case:
```yaml
- name: Create list
set_fact:
my_list:
- foo
- bar
- name: Convert each list element to title case
debug:
msg: "{{ my_list | map('title') }}"
```
```
=> {
"msg": [
"Foo",
"Bar"
]
}
```
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/complex_data_manipulation.rst
### Ansible Version
```console
$ ansible --version
ansible 2.10.6
config file = /home/jdoig/repos/emmet-iac/ansible.cfg
configured module search path = ['/home/jdoig/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /var/lib/awx/venv/sputnik-default/lib/python3.6/site-packages/ansible
executable location = /var/lib/awx/venv/sputnik-default/bin/ansible
python version = 3.6.12 (default, Sep 15 2020, 12:49:50) [GCC 4.8.5 20150623 (Red Hat 4.8.5-37)]
```
### Configuration
```console
$ ansible-config dump --only-changed
```
### OS / Environment
Red Hat Enterprise Linux Server release 7.9 (Maipo)
### Additional Information
The suggested change replaces a broken code example with a working one.
The broken example code was introduced as described in [this comment on issue 20226](https://github.com/ansible/ansible/issues/20226#issuecomment-726853897) which should perhaps be reopened.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/75794
|
https://github.com/ansible/ansible/pull/79356
|
d72326b6af7dab2b2fdf0a13e6ae6946b734375e
|
e7730f5d6cfceba69b99808e7d390ef60d014ec3
| 2021-09-27T03:53:36Z |
python
| 2022-11-09T21:03:50Z |
docs/docsite/rst/playbook_guide/complex_data_manipulation.rst
|
.. _complex_data_manipulation:
Manipulating data
#################
In many cases, you need to do some complex operations with your variables. While Ansible is not recommended as a data processing/manipulation tool, you can use the existing Jinja2 templating in conjunction with the many added Ansible filters, lookups and tests to do some very complex transformations.
Let's start with a quick definition of each type of plugin:
- lookups: Mainly used to query 'external data'. In Ansible these were the primary part of loops using the ``with_<lookup>`` construct, but they can be used independently to return data for processing. They normally return a list due to their primary function in loops, as mentioned previously. Used with the ``lookup`` or ``query`` Jinja2 operators.
- filters: used to change/transform data, used with the ``|`` Jinja2 operator.
- tests: used to validate data, used with the ``is`` Jinja2 operator.
.. _note:
* Some tests and filters are provided directly by Jinja2, so their availability depends on the Jinja2 version, not Ansible.
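For example, all three plugin types can appear in a single expression. A minimal sketch (the ``maybe_set`` variable and the file path are illustrative):

.. code-block:: YAML+Jinja

   vars:
     # lookup queries external data, the 'upper' filter transforms it,
     # and the 'defined' test validates a variable
     motd_text: "{{ (lookup('file', '/etc/motd') | upper) if maybe_set is defined else 'no motd' }}"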
.. _for_loops_or_list_comprehensions:
Loops and list comprehensions
=============================
Most programming languages have loops (``for``, ``while``, and so on) and list comprehensions to do transformations on lists, including lists of objects. Jinja2 has a few filters that provide this functionality: ``map``, ``select``, ``reject``, ``selectattr``, ``rejectattr``.
- map: this is a basic for loop that just allows you to change every item in a list; using the 'attribute' keyword, you can do the transformation based on attributes of the list elements.
- select/reject: this is a for loop with a condition that allows you to create a subset of a list that matches (or not) based on the result of the condition.
- selectattr/rejectattr: very similar to the above, but they use a specific attribute of the list elements for the conditional statement.
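As a quick illustration (a minimal sketch; the variable names are made up), ``select`` and ``map`` chain together much like a list comprehension with a condition:

.. code-block:: YAML+Jinja

   vars:
     numbers: [1, 2, 3, 4]
     # keep the even numbers, then square each one -> [4, 16]
     squared_evens: "{{ numbers | select('even') | map('pow', 2) | list }}"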
.. _exponential_backoff:
Use a loop to create exponential backoff for retries/until:
.. code-block:: yaml
- name: retry ping 10 times with exponential backoff delay
ping:
retries: 10
delay: '{{item|int}}'
loop: "{{ range(1, 10) | map('pow', 2) | list }}"
.. _keys_from_dict_matching_list:
Extract keys from a dictionary matching elements from a list
------------------------------------------------------------
The Python equivalent code would be:
.. code-block:: python
chains = [1, 2]
for chain in chains:
for config in chains_config[chain]['configs']:
print(config['type'])
There are several ways to do it in Ansible; this is just one example:
.. code-block:: YAML+Jinja
:emphasize-lines: 4
:caption: Way to extract matching keys from a list of dictionaries
tasks:
- name: Show extracted list of keys from a list of dictionaries
ansible.builtin.debug:
msg: "{{ chains | map('extract', chains_config) | map(attribute='configs') | flatten | map(attribute='type') | flatten }}"
vars:
chains: [1, 2]
chains_config:
1:
foo: bar
configs:
- type: routed
version: 0.1
- type: bridged
version: 0.2
2:
foo: baz
configs:
- type: routed
version: 1.0
- type: bridged
version: 1.1
.. code-block:: ansible-output
:caption: Results of debug task, a list with the extracted keys
ok: [localhost] => {
"msg": [
"routed",
"bridged",
"routed",
"bridged"
]
}
.. code-block:: YAML+Jinja
:caption: Get the unique list of values of a variable that vary per host
vars:
unique_value_list: "{{ groups['all'] | map ('extract', hostvars, 'varname') | list | unique}}"
.. _find_mount_point:
Find mount point
----------------
In this case, we want to find the mount point for a given path across our machines. Since we already collect mount facts, we can use the following:
.. code-block:: YAML+Jinja
:caption: Use selectattr to filter mounts into list I can then sort and select the last from
:emphasize-lines: 8
- hosts: all
gather_facts: True
vars:
path: /var/lib/cache
tasks:
- name: The mount point for {{path}}, found using the Ansible mount facts, [-1] is the same as the 'last' filter
ansible.builtin.debug:
msg: "{{(ansible_facts.mounts | selectattr('mount', 'in', path) | list | sort(attribute='mount'))[-1]['mount']}}"
.. _omit_elements_from_list:
Omit elements from a list
-------------------------
The special ``omit`` variable ONLY works with module options, but we can still use it in other ways as an identifier to tailor a list of elements:
.. code-block:: YAML+Jinja
:caption: Inline list filtering when feeding a module option
:emphasize-lines: 3, 6
- name: Enable a list of Windows features, by name
ansible.builtin.set_fact:
win_feature_list: "{{ namestuff | reject('equalto', omit) | list }}"
vars:
namestuff:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
Another way is to avoid adding elements to the list in the first place, so you can just use it directly:
.. code-block:: YAML+Jinja
:caption: Using set_fact in a loop to increment a list conditionally
:emphasize-lines: 3, 4, 6
- name: Build unique list with some items conditionally omitted
ansible.builtin.set_fact:
namestuff: ' {{ (namestuff | default([])) | union([item]) }}'
when: item != omit
loop:
- "{{ (fs_installed_smb_v1 | default(False)) | ternary(omit, 'FS-SMB1') }}"
- "foo"
- "bar"
.. _combine_optional_values:
Combine values from same list of dicts
---------------------------------------
Combining positive and negative filters from examples above, you can get a 'value when it exists' and a 'fallback' when it doesn't.
.. code-block:: YAML+Jinja
:caption: Use selectattr and rejectattr to get the ansible_host or inventory_hostname as needed
- hosts: localhost
tasks:
- name: Check hosts in inventory that respond to ssh port
wait_for:
host: "{{ item }}"
port: 22
loop: '{{ has_ah + no_ah }}'
vars:
has_ah: '{{ hostvars|dictsort|selectattr("1.ansible_host", "defined")|map(attribute="1.ansible_host")|list }}'
no_ah: '{{ hostvars|dictsort|rejectattr("1.ansible_host", "defined")|map(attribute="0")|list }}'
.. _custom_fileglob_variable:
Custom Fileglob Based on a Variable
-----------------------------------
This example uses `Python argument list unpacking <https://docs.python.org/3/tutorial/controlflow.html#unpacking-argument-lists>`_ to create a custom list of fileglobs based on a variable.
.. code-block:: YAML+Jinja
:caption: Using fileglob with a list based on a variable.
- hosts: all
vars:
mygroups:
- prod
- web
tasks:
- name: Copy a glob of files based on a list of groups
copy:
src: "{{ item }}"
dest: "/tmp/{{ item }}"
loop: '{{ q("fileglob", *globlist) }}'
vars:
globlist: '{{ mygroups | map("regex_replace", "^(.*)$", "files/\1/*.conf") | list }}'
.. _complex_type_transformations:
Complex Type transformations
=============================
Jinja provides filters for simple data type transformations (``int``, ``bool``, and so on), but when you want to transform data structures, things are not as easy.
You can use loops and list comprehensions as shown above to help, also other filters and lookups can be chained and used to achieve more complex transformations.
.. _create_dictionary_from_list:
Create dictionary from list
---------------------------
In most languages it is easy to create a dictionary (a.k.a. map/associative array/hash and so on) from a list of pairs. In Ansible there are a couple of ways to do it, and the best one for you might depend on the source of your data.
These examples produce ``{"a": "b", "c": "d"}``:
.. code-block:: YAML+Jinja
:caption: Simple list to dict by assuming the list is [key, value , key, value, ...]
vars:
single_list: [ 'a', 'b', 'c', 'd' ]
mydict: "{{ dict(single_list | slice(2)) }}"
.. code-block:: YAML+Jinja
:caption: It is simpler when we have a list of pairs:
vars:
list_of_pairs: [ ['a', 'b'], ['c', 'd'] ]
mydict: "{{ dict(list_of_pairs) }}"
Both end up being the same thing, with ``slice(2)`` transforming ``single_list`` to a ``list_of_pairs`` generator.
A bit more complex, using ``set_fact`` and a ``loop`` to create/update a dictionary with key value pairs from 2 lists:
.. code-block:: YAML+Jinja
:caption: Using set_fact to create a dictionary from a set of lists
:emphasize-lines: 3, 4
- name: Uses 'combine' to update the dictionary and 'zip' to make pairs of both lists
ansible.builtin.set_fact:
mydict: "{{ mydict | default({}) | combine({item[0]: item[1]}) }}"
loop: "{{ (keys | zip(values)) | list }}"
vars:
keys:
- foo
- var
- bar
values:
- a
- b
- c
This results in ``{"foo": "a", "var": "b", "bar": "c"}``.
You can even combine these simple examples with other filters and lookups to create a dictionary dynamically by matching patterns to variable names:
.. code-block:: YAML+Jinja
:caption: Using 'vars' to define dictionary from a set of lists without needing a task
vars:
xyz_stuff: 1234
xyz_morestuff: 567
myvarnames: "{{ q('varnames', '^xyz_') }}"
mydict: "{{ dict(myvarnames|map('regex_replace', '^xyz_', '')|list | zip(q('vars', *myvarnames))) }}"
A quick explanation, since there is a lot to unpack from these two lines:
- The ``varnames`` lookup returns a list of variables that match "begin with ``xyz_``".
- Then we feed the list from the previous step into the ``vars`` lookup to get the list of values.
The ``*`` is used to 'dereference the list' (a pythonism that works in Jinja); otherwise it would take the list as a single argument.
- Both lists get passed to the ``zip`` filter to pair them off into a unified list (key, value, key2, value2, ...).
- The dict function then takes this 'list of pairs' to create the dictionary.
An example of how to use facts to find a host's data that meets condition X:
.. code-block:: YAML+Jinja
vars:
uptime_of_host_most_recently_rebooted: "{{ansible_play_hosts_all | map('extract', hostvars, 'ansible_uptime_seconds') | sort | first}}"
An example to show a host uptime in days/hours/minutes/seconds (assumes facts were gathered).
.. code-block:: YAML+Jinja
- name: Show the uptime in days/hours/minutes/seconds
ansible.builtin.debug:
msg: Uptime {{ now().replace(microsecond=0) - now().fromtimestamp(now(fmt='%s') | int - ansible_uptime_seconds) }}
.. seealso::
:ref:`playbooks_filters`
Jinja2 filters included with Ansible
:ref:`playbooks_tests`
Jinja2 tests included with Ansible
`Jinja2 Docs <https://jinja.palletsprojects.com/>`_
Jinja2 documentation, includes lists for core filters and tests
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,264 |
Docs request: ssh_args in Ansible Configuration Settings
|
### Summary
In the FAQ there's a section talking about `ssh_args`: https://docs.ansible.com/ansible/latest/reference_appendices/faq.html#how-do-i-get-ansible-to-notice-a-dead-target-in-a-timely-manner
However, when you go to the config reference: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings
There is no mention of `ssh_args`. Can we add `ssh_args` to the Ansible Configuration Settings page?
It seems in 2.4 it was documented, not sure why it went away: https://docs.ansible.com/ansible/2.4/intro_configuration.html#ssh-args
### Issue Type
Documentation Report
### Component Name
unsure
### Ansible Version
```console
$ ansible --version
"latest" in docs
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
n/a
```
### OS / Environment
n/a
### Additional Information
n/a
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79264
|
https://github.com/ansible/ansible/pull/79308
|
183c34db6570472ced06e38c8be79c78150e1f4b
|
f9451dfaf89bbab83e6ec19fc7e3954c83ec4f13
| 2022-10-31T17:54:22Z |
python
| 2022-11-10T10:35:02Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _collections_transition:
Where did all the modules go?
+++++++++++++++++++++++++++++
In July, 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10).
* To learn more about using collections, see :ref:`collections`.
* To learn more about developing collections, see :ref:`developing_collections`.
* To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections.
.. _find_my_module:
Where did this specific module go?
++++++++++++++++++++++++++++++++++
If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`.
.. _slow_install:
How can I speed up Ansible on systems with slow disks?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible may feel sluggish on systems with slow disks, such as a Raspberry Pi. See `Ansible might be running slow if libyaml is not available <https://www.jeffgeerling.com/blog/2021/ansible-might-be-running-slow-if-libyaml-not-available>`_ for hints on how to improve this.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire play?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting environment variables can be done with the `environment` keyword. It can be used at the task or other levels in the play.
.. code-block:: yaml
shell:
cmd: date
environment:
LANG: fr_FR.UTF-8
.. code-block:: yaml
hosts: servers
environment:
PATH: "{{ ansible_env.PATH }}:/thingy/bin"
SOME: value
.. note:: starting in 2.0.1 the setup task from ``gather_facts`` also inherits the environment directive from the play; you might need to use the ``|default`` filter to avoid errors if setting this at the play level.
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
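For example, a ``group_vars/webservers.yml`` file (the values are illustrative) can hold settings shared by every host in the group:

.. code-block:: yaml

   # group_vars/webservers.yml
   ansible_port: 5000
   ansible_user: alice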
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use
Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, if you are first installing Ansible on these enterprise operating systems, it offers a better experience for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a ``ProxyCommand`` in the
``ansible_ssh_common_args`` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents:
.. code-block:: yaml
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group ``gatewayed``. (These arguments are used
in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to
repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.)
Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute ``nc %h:%p`` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``,
or globally by setting ``ssh_args`` in ``ansible.cfg``.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option,
SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval``
into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that
``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
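A minimal sketch of such a setting in ``ansible.cfg`` (the 30-second interval is only an example; the other options shown are common ControlPersist settings, not requirements):

.. code-block:: ini

   [ssh_connection]
   ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=30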
.. _cloud_provider_performance:
How do I speed up Ansible runs for servers from cloud providers (EC2, OpenStack, and so on)?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of machines of a cloud provider from your laptop.
Rather, connect to a management node inside this cloud provider first and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
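For example, in an ini inventory (the hostname and path are illustrative):

.. code-block:: ini

   freebsd1 ansible_python_interpreter=/usr/local/bin/python3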
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter through
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python through
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``),
this facility will be ignored so you will be at the mercy of the remote `$PATH`.
.. _installation_faqs:
How do I handle the package dependencies required by Ansible package dependencies during Ansible installation ?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages, which are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
In order to solve these kinds of dependency issues, you might need to install required packages using
the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available through pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./py3-ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./py3-ansible/lib64/python3.*/site-packages/
Running on macOS
----------------
When executing Ansible on a system with macOS as a controller machine one might encounter the following error:
.. error::
+[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
ERROR! A worker was found in a dead state
In general the recommended workaround is to set the following environment variable in your shell:
.. code-block:: shell
$ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
(see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set:
.. code-block:: ini
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this:
.. code-block:: ini
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
this in inventory like so:
.. code-block:: ini
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg` then Ansible modules are transferred in binary mode through sftp however execution of python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
To fix this, set the path to the Python installation in your inventory like so:
.. code-block:: ini
zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``:
.. code-block:: ini
zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
Running under fakeroot
----------------------
Some issues arise because ``fakeroot`` does not create a complete, POSIX-compliant environment by default.
It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
For example, in the Ansible config file (or through an environment variable) you can set:
.. code-block:: ini
remote_tmp=$HOME/.ansible/tmp
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1``
in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module
as an ad hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe
the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question
if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host-specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to run a play that gathers facts first so that the facts referenced in the task above are available.
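For example, a minimal sketch of a play that gathers facts and then dumps everything known about each host (the ``all`` host pattern is just an illustration):

.. code-block:: yaml

    - hosts: all
      gather_facts: true
      tasks:
        - name: show every variable known for this host
          ansible.builtin.debug:
            var: hostvars[inventory_hostname]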
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can access the ``groups`` dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname,
you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers:
.. code-block:: yaml
- hosts: db_servers
tasks:
- debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the IPv4 address of an arbitrary interface, where the interface to be used may be supplied
through a role parameter or other input. Variable names can be built by concatenating strings with the ``~`` operator, like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname``
is a magic variable that indicates the current host you are looping over in the host loop.
In the example above, if your interface names have dashes, you must replace them with underscores:
.. code-block:: jinja
    {{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }}
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk-assign variables;
they are not a first-class entity, and Ansible only cares about hosts and tasks.
That said, you could just access the variable by selecting a host that is part of that group, see first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the IP address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud-based
inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 ``{% set %}`` statement to simplify this, or in a playbook, you could also use set_fact:
.. code-block:: yaml+jinja
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The ``copy`` module copies a directory recursively when ``src`` points to a directory. However, take a look at the ``synchronize`` module if you want to do something more efficient
for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules.
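For example, a minimal sketch contrasting the two approaches (the paths are hypothetical, and ``synchronize`` lives in the ``ansible.posix`` collection and requires ``rsync`` on both the controller and the target):

.. code-block:: yaml

    - name: copy a directory tree with the copy module
      ansible.builtin.copy:
        src: files/conf.d/        # trailing slash copies the directory contents
        dest: /etc/myapp/conf.d/

    - name: same transfer with synchronize, which wraps rsync (faster for many files)
      ansible.posix.synchronize:
        src: files/conf.d/
        dest: /etc/myapp/conf.d/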
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
**On the controller machine:** To access existing environment variables on the controller, use the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine:
.. code-block:: yaml+jinja
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
**On target machines:** Environment variables are available through facts in the ``ansible_env`` variable:
.. code-block:: jinja
{{ ansible_env.HOME }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment`
in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the
:ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>`
modules to introduce environment variables into files. The exact files to edit vary depending on your OS
and distribution and local configuration.
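For example, a minimal sketch using ``lineinfile`` to persist a variable for login shells; the file path and variable name are assumptions and vary by OS and distribution:

.. code-block:: yaml

    - name: persist an environment variable for login shells
      ansible.builtin.lineinfile:
        path: /etc/profile.d/myapp.sh   # hypothetical path, varies by OS
        line: 'export MYAPP_HOME=/opt/myapp'
        create: true
        mode: '0644'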
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An Ansible ad hoc command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The ``mkpasswd`` utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (for example, you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
In OpenBSD, a similar option is available in the base system, called ``encrypt(1)``.
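Following the Vault recommendation above, you can, for example, encrypt a hashed value in place with ``ansible-vault encrypt_string`` and paste the output into your vars file (the variable name is just an illustration):

.. code-block:: shell-session

    ansible-vault encrypt_string 'the-hashed-password-from-above' --name 'user_password'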
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Array notation also allows for dynamic variable composition; see dynamic_variables_.
Another problem with dot notation is that some keys collide with attributes and methods of Python dictionaries.
* Example of incorrect syntax when ``item`` is a dictionary:
.. code-block:: jinja
item.update
This variant causes a syntax error because ``update()`` is a Python method for dictionaries.
* Example of correct syntax:
.. code-block:: jinja
item['update']
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this:
.. code-block:: yaml+jinja
#...
vars:
usermod_args:
name: testuser
state: present
update_password: always
tasks:
- user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence
found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take
precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding
  with variables (this will also disable the original warning); a sketch of this follows below
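A minimal sketch of the second mitigation, assuming an ``ansible.cfg`` in your project directory:

.. code-block:: ini

    # Facts remain available under the ansible_facts variable,
    # but are no longer injected as top-level variables that could collide.
    [defaults]
    inject_facts_as_vars = False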
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services
and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our
`webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / GUI?
++++++++++++++++++++++++++++++++++++++++++++
Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`.
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task whose results or arguments you don't want shown when using ``-v`` (verbose) mode, the following task or playbook attribute can be useful:
.. code-block:: yaml+jinja
- name: secret task
shell: /usr/bin/do_something --value={{ secret_value }}
no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise be able to see the output.
The ``no_log`` attribute can also apply to an entire play:
.. code-block:: yaml
- hosts: all
no_log: True
Be aware that this will make the play somewhat difficult to debug; it is recommended to apply ``no_log``
to single tasks only, once a playbook is completed. Note that the use of the
``no_log`` attribute does not prevent data from being shown when debugging Ansible itself through
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how do I interpolate variables or dynamic variable names?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even if previously you could use variables without
specifying them (as in ``loop`` or ``with_`` clauses), as that made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you might expect. If you need to use a dynamic variable, use one of the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' ~ other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' ~ other_var) }}
To determine if a keyword requires ``{{ }}`` or even supports templating, use ``ansible-doc -t keyword <name>``.
This returns documentation on the keyword, including a ``template`` field with one of the values ``explicit`` (requires ``{{ }}``),
``implicit`` (assumes ``{{ }}``, so none needed), or ``static`` (no templating supported; all characters are interpreted literally).
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host, so ``ansible_host`` is overwritten,
but you can still access the original through ``hostvars``:
.. code-block:: yaml+jinja
original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on.
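For example, a minimal sketch of a delegated task that still reports the original target's connection address (the proxy hostname is hypothetical, and ``default`` covers hosts with no explicit ``ansible_host`` set):

.. code-block:: yaml+jinja

    - name: run on a proxy host but report the original target's address
      ansible.builtin.debug:
        msg: "connecting on behalf of {{ hostvars[inventory_hostname]['ansible_host'] | default(inventory_hostname) }}"
      delegate_to: proxy.example.com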
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism:
.. error::
failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request
In these releases, SCP tries to validate that the path of the file to fetch matches the requested path. The validation
fails if the remote filename requires quotes to escape spaces or non-ASCII characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
  * Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section (see the sketch after this list)
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section
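A minimal ``ansible.cfg`` sketch of the configuration-file variants above (pick the lines that match the approach you chose):

.. code-block:: ini

    [ssh_connection]
    # Use SFTP instead of SCP
    scp_if_ssh = False
    # Only if you must keep using SCP: skip the client-side path validation
    # scp_extra_args = -T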
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
.. _mfa_support:
Does Ansible support multi-factor authentication (2FA/MFA/biometrics/fingerprint/USB key/OTP/...)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction.
As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction.
Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also
tend to have very short expiration periods, so they require frequent reauthorization, which is also an issue with many hosts and/or
a long set of tasks.
In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures.
With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution.
.. _complex_configuration_validation:
The 'validate' option is not enough for my needs, what do I do?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails.
This uses the temporary file Ansible creates before doing the final update. In many cases this does not work, since the validation tools
for the specific application require either specific names, multiple files, or some other factor that is not present in this simple feature.
For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue
and backups, which most file-based modules also support:
.. code-block:: yaml
- name: update config and backout if validation fails
block:
- name: do the actual update, works with copy, lineinfile and any action that allows for `backup`.
template: src=template.j2 dest=/x/y/z backup=yes moreoptions=stuff
register: updated
        - name: run validation, this will vary by application. We assume it returns an error when validation fails; use `failed_when` otherwise.
          shell: run_validation_command
become: true
become_user: requiredbyapp
environment:
WEIRD_REQUIREMENT: 1
rescue:
- name: restore backup file to original, in the hope the previous configuration was working.
copy:
remote_src: true
dest: /x/y/z
src: "{{ updated['backup_file'] }}"
always:
- name: We choose to always delete backup, but could copy or move, or only delete in rescue.
file:
path: "{{ updated['backup_file'] }}"
state: absent
.. _jinja2_faqs:
Why does the ``regex_search`` filter return `None` instead of an empty string?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Until the Jinja2 2.10 release, Jinja was only able to return strings, but Ansible needed Python objects in some cases. Ansible uses ``safe_eval`` and only sends strings that look like certain types of Python objects through this function. When ``regex_search`` does not find a match, the result (``None``) is converted to the string "None", which is not useful in non-native Jinja2.
The following example of a single templating action shows this behavior:
.. code-block:: jinja
{{ 'ansible' | regex_search('foobar') }}
This example does not result in a Python ``None``, so Ansible historically converted it to "" (empty string).
The native jinja2 functionality actually allows us to return full Python objects, that are always represented as Python objects everywhere, and as such the result of a single templating action with ``regex_search`` can result in the Python ``None``.
.. note::
Native jinja2 functionality is not needed when ``regex_search`` is used as an intermediate result that is then compared to the jinja2 ``none`` test.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') is none }}
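If you would rather get an empty string than ``None`` under native Jinja2, one option (a sketch, not the only approach) is to chain the ``default`` filter with its second argument set to ``true`` so that falsy results are replaced:

.. code-block:: jinja

    {{ 'ansible' | regex_search('foobar') | default('', true) }}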
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Documentation for Ansible is kept in the main project git repository, and complete instructions
for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _legacy_vs_builtin:
What is the difference between ``ansible.legacy`` and ``ansible.builtin`` collections?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neither is a real collection. They are virtually constructed by the core engine (synthetic collections).
The ``ansible.builtin`` collection only refers to plugins that ship with ``ansible-core``.
The ``ansible.legacy`` collection is a superset of ``ansible.builtin`` (you can reference the plugins from builtin through ``ansible.legacy``). You also get the ability to
add 'custom' plugins in the :ref:`configured paths and adjacent directories <ansible_search_path>`, with the ability to override the builtin plugins that have the same name.
Also, ``ansible.legacy`` is what you get by default when you do not specify an FQCN.
So this:
.. code-block:: yaml
- shell: echo hi
Is really equivalent to:
.. code-block:: yaml
- ansible.legacy.shell: echo hi
However, if you do not override the ``shell`` module, you can also just write it as ``ansible.builtin.shell``, since legacy will resolve to the builtin collection.
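For example, a sketch of how resolution changes with a local override; the ``./library/shell.py`` path is an assumption about where an adjacent custom module lives:

.. code-block:: yaml

    # With a custom module at ./library/shell.py next to the playbook:
    - ansible.legacy.shell: echo hi    # resolves to the custom ./library/shell.py
    - ansible.builtin.shell: echo hi   # always resolves to the bundled module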
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
       Have a question? Stop by the Google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,612 |
de-duplicate and integrate the sanity-testing pages
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a page for each sanity test in https://github.com/ansible/ansible/tree/devel/docs/docsite/rst/dev_guide/testing/sanity. At least two of these are merely links to pages in the `dev_guide` directory:
- [x] [PEP8](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/pep8.html)
- [x] [validate-modules](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/validate-modules.html)
Both were converted from `README.md` files in https://github.com/ansible/ansible/pull/24094/files.
The bot (and in future, possibly the output of `ansible-test`) directs users to the files in `testing/sanity`. Consolidate the content, keep the files in `docs/docsite/rst/dev_guide/testing/sanity`, and remove the duplicates in `docs/docsite/rst/dev_guide`.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
testing
##### ANSIBLE VERSION
2.9
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/59612
|
https://github.com/ansible/ansible/pull/79380
|
d925ece764b4943d92897abd01ced7209cdf133e
|
a954918b6095adf52c663bdcc340c55762189393
| 2019-07-25T23:19:47Z |
python
| 2022-11-17T21:20:20Z |
docs/docsite/rst/dev_guide/testing/sanity/pep8.rst
|
pep8
====
Python static analysis for PEP 8 style guideline compliance.
See :ref:`testing_pep8` for more information.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,612 |
de-duplicate and integrate the sanity-testing pages
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a page for each sanity test in https://github.com/ansible/ansible/tree/devel/docs/docsite/rst/dev_guide/testing/sanity. At least two of these are merely links to pages in the `dev_guide` directory:
- [x] [PEP8](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/pep8.html)
- [x] [validate-modules](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/validate-modules.html)
Both were converted from `README.md` files in https://github.com/ansible/ansible/pull/24094/files.
The bot (and in future, possibly the output of `ansible-test`) directs users to the files in `testing/sanity`. Consolidate the content, keep the files in `docs/docsite/rst/dev_guide/testing/sanity`, and remove the duplicates in `docs/docsite/rst/dev_guide`.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
testing
##### ANSIBLE VERSION
2.9
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/59612
|
https://github.com/ansible/ansible/pull/79380
|
d925ece764b4943d92897abd01ced7209cdf133e
|
a954918b6095adf52c663bdcc340c55762189393
| 2019-07-25T23:19:47Z |
python
| 2022-11-17T21:20:20Z |
docs/docsite/rst/dev_guide/testing/sanity/validate-modules.rst
|
validate-modules
================
Analyze modules for common issues in code and documentation.
See :ref:`testing_validate-modules` for more information.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,612 |
de-duplicate and integrate the sanity-testing pages
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a page for each sanity test in https://github.com/ansible/ansible/tree/devel/docs/docsite/rst/dev_guide/testing/sanity. At least two of these are merely links to pages in the `dev_guide` directory:
- [x] [PEP8](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/pep8.html)
- [x] [validate-modules](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/validate-modules.html)
Both were converted from `README.md` files in https://github.com/ansible/ansible/pull/24094/files.
The bot (and in future, possibly the output of `ansible-test`) directs users to the files in `testing/sanity`. Consolidate the content, keep the files in `docs/docsite/rst/dev_guide/testing/sanity`, and remove the duplicates in `docs/docsite/rst/dev_guide`.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
testing
##### ANSIBLE VERSION
2.9
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/59612
|
https://github.com/ansible/ansible/pull/79380
|
d925ece764b4943d92897abd01ced7209cdf133e
|
a954918b6095adf52c663bdcc340c55762189393
| 2019-07-25T23:19:47Z |
python
| 2022-11-17T21:20:20Z |
docs/docsite/rst/dev_guide/testing_pep8.rst
|
:orphan:
.. _testing_pep8:
*****
PEP 8
*****
.. contents:: Topics
`PEP 8`_ style guidelines are enforced by `pycodestyle`_ on all python files in the repository by default.
Running Locally
===============
The `PEP 8`_ check can be run locally with:
.. code-block:: shell
ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...
.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
.. _pycodestyle: https://pypi.org/project/pycodestyle/
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 59,612 |
de-duplicate and integrate the sanity-testing pages
|
<!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
We have a page for each sanity test in https://github.com/ansible/ansible/tree/devel/docs/docsite/rst/dev_guide/testing/sanity. At least two of these are merely links to pages in the `dev_guide` directory:
- [x] [PEP8](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/pep8.html)
- [x] [validate-modules](https://docs.ansible.com/ansible/latest/dev_guide/testing/sanity/validate-modules.html)
Both were converted from `README.md` files in https://github.com/ansible/ansible/pull/24094/files.
The bot (and in future, possibly the output of `ansible-test`) directs users to the files in `testing/sanity`. Consolidate the content, keep the files in `docs/docsite/rst/dev_guide/testing/sanity`, and remove the duplicates in `docs/docsite/rst/dev_guide`.
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
docs.ansible.com
testing
##### ANSIBLE VERSION
2.9
##### CONFIGURATION
N/A
##### OS / ENVIRONMENT
N/A
|
https://github.com/ansible/ansible/issues/59612
|
https://github.com/ansible/ansible/pull/79380
|
d925ece764b4943d92897abd01ced7209cdf133e
|
a954918b6095adf52c663bdcc340c55762189393
| 2019-07-25T23:19:47Z |
python
| 2022-11-17T21:20:20Z |
docs/docsite/rst/dev_guide/testing_validate-modules.rst
|
:orphan:
.. _testing_validate-modules:
****************
validate-modules
****************
.. contents:: Topics
Python program to help test or validate Ansible modules.
``validate-modules`` is one of the ``ansible-test`` Sanity Tests, see :ref:`testing_sanity` for more information.
Originally developed by Matt Martz (@sivel)
Usage
=====
.. code:: shell
cd /path/to/ansible/source
source hacking/env-setup
ansible-test sanity --test validate-modules
Help
====
.. code:: shell
usage: validate-modules [-h] [-w] [--exclude EXCLUDE] [--arg-spec]
[--base-branch BASE_BRANCH] [--format {json,plain}]
[--output OUTPUT]
modules [modules ...]
positional arguments:
modules Path to module or module directory
optional arguments:
-h, --help show this help message and exit
-w, --warnings Show warnings
--exclude EXCLUDE RegEx exclusion pattern
--arg-spec Analyze module argument spec
--base-branch BASE_BRANCH
Used in determining if new options were added
--format {json,plain}
Output format. Default: "plain"
--output OUTPUT Output location, use "-" for stdout. Default "-"
Extending validate-modules
==========================
The ``validate-modules`` tool has a `schema.py <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/schema.py>`_ that is used to validate the YAML blocks, such as ``DOCUMENTATION`` and ``RETURNS``.
Codes
=====
============================================================ ================== ==================== =========================================================================================
**Error Code** **Type** **Level** **Sample Message**
------------------------------------------------------------ ------------------ -------------------- -----------------------------------------------------------------------------------------
ansible-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier Ansible version
collection-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier collection version
ansible-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier Ansible version
ansible-module-not-initialized Syntax Error Execution of the module did not result in initialization of AnsibleModule
collection-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier collection version
deprecated-date Documentation Error A date before today appears as ``removed_at_date`` or in ``deprecated_aliases``
deprecation-mismatch Documentation Error Module marked as deprecated or removed in at least one of the filename, its metadata, or in DOCUMENTATION (setting DOCUMENTATION.deprecated for deprecation or removing all Documentation for removed) but not in all three places.
doc-choices-do-not-match-spec Documentation Error Value for "choices" from the argument_spec does not match the documentation
doc-choices-incompatible-type Documentation Error Choices value from the documentation is not compatible with type defined in the argument_spec
doc-default-does-not-match-spec Documentation Error Value for "default" from the argument_spec does not match the documentation
doc-default-incompatible-type Documentation Error Default value from the documentation is not compatible with type defined in the argument_spec
doc-elements-invalid Documentation Error Documentation specifies elements for argument, when "type" is not ``list``.
doc-elements-mismatch Documentation Error Argument_spec defines elements different than documentation does
doc-missing-type Documentation Error Documentation doesn't specify a type but argument in ``argument_spec`` use default type (``str``)
doc-required-mismatch Documentation Error argument in argument_spec is required but documentation says it is not, or vice versa
doc-type-does-not-match-spec Documentation Error Argument_spec defines type different than documentation does
documentation-error Documentation Error Unknown ``DOCUMENTATION`` error
documentation-syntax-error Documentation Error Invalid ``DOCUMENTATION`` schema
illegal-future-imports Imports Error Only the following ``from __future__`` imports are allowed: ``absolute_import``, ``division``, and ``print_function``.
import-before-documentation Imports Error Import found before documentation variables. All imports must appear below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
import-error Documentation Error ``Exception`` attempting to import module for ``argument_spec`` introspection
import-placement Locations Warning Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
imports-improper-location Imports Error Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
incompatible-choices Documentation Error Choices value from the argument_spec is not compatible with type defined in the argument_spec
incompatible-default-type Documentation Error Default value from the argument_spec is not compatible with type defined in the argument_spec
invalid-argument-name Documentation Error Argument in argument_spec must not be one of 'message', 'syslog_facility' as it is used internally by Ansible Core Engine
invalid-argument-spec Documentation Error Argument in argument_spec must be a dictionary/hash when used
invalid-argument-spec-options Documentation Error Suboptions in argument_spec are invalid
invalid-documentation Documentation Error ``DOCUMENTATION`` is not valid YAML
invalid-documentation-markup Documentation Error ``DOCUMENTATION`` or ``RETURN`` contains invalid markup
invalid-documentation-options Documentation Error ``DOCUMENTATION.options`` must be a dictionary/hash when used
invalid-examples Documentation Error ``EXAMPLES`` is not valid YAML
invalid-extension Naming Error Official Ansible modules must have a ``.py`` extension for python modules or a ``.ps1`` for powershell modules
invalid-module-schema Documentation Error ``AnsibleModule`` schema validation error
invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a `semantic version <https://semver.org/>`_)
invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing
missing-documentation Documentation Error No ``DOCUMENTATION`` provided
missing-examples Documentation Error No ``EXAMPLES`` provided
missing-gplv3-license Documentation Error GPLv3 license header not found
missing-module-utils-basic-import Imports Warning Did not find ``ansible.module_utils.basic`` import
missing-module-utils-import-csharp-requirements Imports Error No ``Ansible.ModuleUtils`` or C# Ansible util requirements/imports found
missing-powershell-interpreter Syntax Error Interpreter line is not ``#!powershell``
missing-python-interpreter Syntax Error Interpreter line is not ``#!/usr/bin/python``
missing-return Documentation Error No ``RETURN`` documentation provided
missing-return-legacy Documentation Warning No ``RETURN`` documentation provided for legacy module
missing-suboption-docs Documentation Error Argument in argument_spec has sub-options but documentation does not define sub-options
module-incorrect-version-added Documentation Error Module level ``version_added`` is incorrect
module-invalid-version-added Documentation Error Module level ``version_added`` is not a valid version number
module-utils-specific-import Imports Error ``module_utils`` imports should import specific components, not ``*``
multiple-utils-per-requires Imports Error ``Ansible.ModuleUtils`` requirements do not support multiple modules per statement
multiple-csharp-utils-per-requires Imports Error Ansible C# util requirements do not support multiple utils per statement
no-default-for-required-parameter Documentation Error Option is marked as required but specifies a default. Arguments with a default should not be marked as required
no-log-needed Parameters Error Option name suggests that the option contains a secret value, while ``no_log`` is not specified for this option in the argument spec. If this is a false positive, explicitly set ``no_log=False``
nonexistent-parameter-documented Documentation Error Argument is listed in DOCUMENTATION.options, but not accepted by the module
option-incorrect-version-added Documentation Error ``version_added`` for new option is incorrect
option-invalid-version-added Documentation Error ``version_added`` for option is not a valid version number
parameter-invalid Documentation Error Argument in argument_spec is not a valid python identifier
parameter-invalid-elements Documentation Error Value for "elements" is valid only when value of "type" is ``list``
implied-parameter-type-mismatch Documentation Error Argument_spec implies ``type="str"`` but documentation defines it as different data type
parameter-type-not-in-doc Documentation Error Type value is defined in ``argument_spec`` but documentation doesn't specify a type
parameter-alias-repeated Parameters Error argument in argument_spec has at least one alias specified multiple times in aliases
parameter-alias-self Parameters Error argument in argument_spec is specified as its own alias
parameter-documented-multiple-times Documentation Error argument in argument_spec with aliases is documented multiple times
parameter-list-no-elements Parameters Error argument in argument_spec "type" is specified as ``list`` without defining "elements"
parameter-state-invalid-choice Parameters Error Argument ``state`` includes ``get``, ``list`` or ``info`` as a choice. Functionality should be in an ``_info`` or (if further conditions apply) ``_facts`` module.
python-syntax-error Syntax Error Python ``SyntaxError`` while parsing module
removal-version-must-be-major Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be removed are major versions (x.0.0)
return-syntax-error Documentation Error ``RETURN`` is not valid YAML, ``RETURN`` fragments missing or invalid
return-invalid-version-added Documentation Error ``version_added`` for return value is not a valid version number
subdirectory-missing-init Naming Error Ansible module subdirectories must contain an ``__init__.py``
try-except-missing-has Imports Warning Try/Except ``HAS_`` expression missing
undocumented-parameter Documentation Error Argument is listed in the argument_spec, but not documented in the module
unidiomatic-typecheck Syntax Error Type comparison using ``type()`` found. Use ``isinstance()`` instead
unknown-doc-fragment Documentation Warning Unknown pre-existing ``DOCUMENTATION`` error
use-boto3 Imports Error ``boto`` import found, new modules should use ``boto3``
use-fail-json-not-sys-exit Imports Error ``sys.exit()`` call found. Should be ``exit_json``/``fail_json``
use-module-utils-urls Imports Error ``requests`` import found, should use ``ansible.module_utils.urls`` instead
use-run-command-not-os-call Imports Error ``os.call`` used instead of ``module.run_command``
use-run-command-not-popen Imports Error ``subprocess.Popen`` used instead of ``module.run_command``
use-short-gplv3-license Documentation Error GPLv3 license header should be the :ref:`short form <copyright>` for new modules
mutually_exclusive-type Documentation Error mutually_exclusive entry contains non-string value
mutually_exclusive-collision Documentation Error mutually_exclusive entry has repeated terms
mutually_exclusive-unknown Documentation Error mutually_exclusive entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_one_of-type Documentation Error required_one_of entry contains non-string value
required_one_of-collision Documentation Error required_one_of entry has repeated terms
required_one_of-unknown Documentation Error required_one_of entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_together-type Documentation Error required_together entry contains non-string value
required_together-collision Documentation Error required_together entry has repeated terms
required_together-unknown Documentation Error required_together entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_if-is_one_of-type Documentation Error required_if entry has a fourth value which is not a bool
required_if-requirements-type Documentation Error required_if entry has a third value (requirements) which is not a list or tuple
required_if-requirements-collision Documentation Error required_if entry has repeated terms in requirements
required_if-requirements-unknown Documentation Error required_if entry's requirements contains option which does not appear in argument_spec (potentially an alias of an option?)
required_if-unknown-key Documentation Error required_if entry's key does not appear in argument_spec (potentially an alias of an option?)
required_if-key-in-requirements Documentation Error required_if entry contains its key in requirements list/tuple
required_if-value-type Documentation Error required_if entry's value is not of the type specified for its key
required_by-collision Documentation Error required_by entry has repeated terms
required_by-unknown Documentation Error required_by entry contains option which does not appear in argument_spec (potentially an alias of an option?)
version-added-must-be-major-or-minor Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be added are major and minor versions (x.y.0)
============================================================ ================== ==================== =========================================================================================
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,368 |
galaxy.yml `manifest: {}` excludes REUSE licenses by default
|
### Summary
Multiple community collections have adopted the REUSE licensing standard. The cornerstone of REUSE is a LICENSES directory in the repository root. Additionally, some of these collections have `*.license` files within sub-directories and `.reuse/dep5` file. Unfortunately, the manifest directives exclude *all* of those by default. It's *very* important that the default manifest directories include licensing information. This specification is well established across the OSS landscape, so I think it's reasonable to handle it by default.
### Issue Type
Bug Report
### Component Name
ansible-galaxy collection build
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gotmax/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/gotmax/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.8 (main, Nov 9 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = community.general.yaml
```
### OS / Environment
Fedora 36
### Steps to Reproduce
1. Download the community.general 6.0.0 sources
2. `ansible-galaxy build .`
3. Add `manifest: {}` to the galaxy.yml
4. Rebuild
5. Compare the outputs
### Expected Results
The aforementioned licensing related files listed above should be included. The default directives list should be changed to something like:
```
include meta/*.yml
include *.txt *.md *.rst *.license COPYING LICENS
recursive-include LICENSES **
recursive-include .reuse **
recursive-include tests **
recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license
recursive-include roles **.yml **.yaml **.json **.j2 **.license
recursive-include playbooks **.yml **.yaml **.json **.license
recursive-include changelogs **.yml **.yaml **.license
recursive-include plugins */**.py **.license
recursive-include plugins/become **.yml **.yaml **.license
recursive-include plugins/cache **.yml **.yaml **.license
recursive-include plugins/callback **.yml **.yaml **.license
recursive-include plugins/cliconf **.yml **.yaml **.license
recursive-include plugins/connection **.yml **.yaml **.license
recursive-include plugins/filter **.yml **.yaml **.license
recursive-include plugins/httpapi **.yml **.yaml **.license
recursive-include plugins/inventory **.yml **.yaml **.license
recursive-include plugins/lookup **.yml **.yaml **.license
recursive-include plugins/netconf **.yml **.yaml **.license
recursive-include plugins/shell **.yml **.yaml **.license
recursive-include plugins/strategy **.yml **.yaml **.license
recursive-include plugins/test **.yml **.yaml **.license
recursive-include plugins/vars **.yml **.yaml **.license
recursive-include plugins/modules **.ps1 **.yml **.yaml **.license
recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license
exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
recursive-exclude tests/output **
global-exclude /.* /__pycache__
```
---
``` diff
--- old
+++ new
@@ -1,27 +1,29 @@
include meta/*.yml
-include *.txt *.md *.rst COPYING LICENSE
+include *.txt *.md *.rst *.license COPYING LICENSE
+recursive-include LICENSES **
+recursive-include .reuse **
recursive-include tests **
-recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt
-recursive-include roles **.yml **.yaml **.json **.j2
-recursive-include playbooks **.yml **.yaml **.json
-recursive-include changelogs **.yml **.yaml
-recursive-include plugins */**.py
-recursive-include plugins/become **.yml **.yaml
-recursive-include plugins/cache **.yml **.yaml
-recursive-include plugins/callback **.yml **.yaml
-recursive-include plugins/cliconf **.yml **.yaml
-recursive-include plugins/connection **.yml **.yaml
-recursive-include plugins/filter **.yml **.yaml
-recursive-include plugins/httpapi **.yml **.yaml
-recursive-include plugins/inventory **.yml **.yaml
-recursive-include plugins/lookup **.yml **.yaml
-recursive-include plugins/netconf **.yml **.yaml
-recursive-include plugins/shell **.yml **.yaml
-recursive-include plugins/strategy **.yml **.yaml
-recursive-include plugins/test **.yml **.yaml
-recursive-include plugins/vars **.yml **.yaml
-recursive-include plugins/modules **.ps1 **.yml **.yaml
-recursive-include plugins/module_utils **.ps1 **.psm1 **.cs
+recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license
+recursive-include roles **.yml **.yaml **.json **.j2 **.license
+recursive-include playbooks **.yml **.yaml **.json **.license
+recursive-include changelogs **.yml **.yaml **.license
+recursive-include plugins */**.py **.license
+recursive-include plugins/become **.yml **.yaml **.license
+recursive-include plugins/cache **.yml **.yaml **.license
+recursive-include plugins/callback **.yml **.yaml **.license
+recursive-include plugins/cliconf **.yml **.yaml **.license
+recursive-include plugins/connection **.yml **.yaml **.license
+recursive-include plugins/filter **.yml **.yaml **.license
+recursive-include plugins/httpapi **.yml **.yaml **.license
+recursive-include plugins/inventory **.yml **.yaml **.license
+recursive-include plugins/lookup **.yml **.yaml **.license
+recursive-include plugins/netconf **.yml **.yaml **.license
+recursive-include plugins/shell **.yml **.yaml **.license
+recursive-include plugins/strategy **.yml **.yaml **.license
+recursive-include plugins/test **.yml **.yaml **.license
+recursive-include plugins/vars **.yml **.yaml **.license
+recursive-include plugins/modules **.ps1 **.yml **.yaml **.license
+recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license
# manifest.directives from galaxy.yml inserted here
exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
recursive-exclude tests/output **
```
### Actual Results
> diff <(tar tf ../6.0.0_manifest_comp/before/community-general-6.0.0.tar.gz|sort) <(tar tf ../6.0.0_manifest_comp/after/community-general-6.0.0.tar.gz | sort)
``` diff
1,15d0
< .azure-pipelines/
< .azure-pipelines/azure-pipelines.yml
< .azure-pipelines/README.md
< .azure-pipelines/scripts/
< .azure-pipelines/scripts/aggregate-coverage.sh
< .azure-pipelines/scripts/combine-coverage.py
< .azure-pipelines/scripts/process-results.sh
< .azure-pipelines/scripts/publish-codecov.py
< .azure-pipelines/scripts/report-coverage.sh
< .azure-pipelines/scripts/run-tests.sh
< .azure-pipelines/scripts/time-command.py
< .azure-pipelines/templates/
< .azure-pipelines/templates/coverage.yml
< .azure-pipelines/templates/matrix.yml
< .azure-pipelines/templates/test.yml
17d1
< CHANGELOG.rst.license
20d3
< changelogs/changelog.yaml.license
22,24d4
< changelogs/fragments/
< changelogs/fragments/.keep
< changelogs/.gitignore
89,108d68
< .github/
< .github/BOTMETA.yml
< .github/dependabot.yml
< .github/ISSUE_TEMPLATE/
< .github/ISSUE_TEMPLATE/bug_report.yml
< .github/ISSUE_TEMPLATE/config.yml
< .github/ISSUE_TEMPLATE/documentation_report.yml
< .github/ISSUE_TEMPLATE/feature_request.yml
< .github/patchback.yml
< .github/settings.yml
< .github/workflows/
< .github/workflows/codeql-analysis.yml
< .github/workflows/docs-pr.yml
< .github/workflows/reuse.yml
< .gitignore
< LICENSES/
< LICENSES/BSD-2-Clause.txt
< LICENSES/GPL-3.0-or-later.txt
< LICENSES/MIT.txt
< LICENSES/PSF-2.0.txt
922d881
< .pre-commit-config.yaml
924,925d882
< .reuse/
< .reuse/dep5
928d884
< tests/.gitignore
1114,1115d1069
< tests/integration/targets/django_manage/files/base_test/startproj/
< tests/integration/targets/django_manage/files/base_test/startproj/.keep
2512d2465
< tests/integration/targets/terraform/.gitignore
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79368
|
https://github.com/ansible/ansible/pull/79403
|
a954918b6095adf52c663bdcc340c55762189393
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
| 2022-11-12T22:04:28Z |
python
| 2022-11-17T23:13:01Z |
changelogs/fragments/79368-galaxy-manifest-reuse-licenses.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,368 |
galaxy.yml `manifest: {}` excludes REUSE licenses by default
|
### Summary
Multiple community collections have adopted the REUSE licensing standard. The cornerstone of REUSE is a LICENSES directory in the repository root. Additionally, some of these collections have `*.license` files within sub-directories and `.reuse/dep5` file. Unfortunately, the manifest directives exclude *all* of those by default. It's *very* important that the default manifest directories include licensing information. This specification is well established across the OSS landscape, so I think it's reasonable to handle it by default.
### Issue Type
Bug Report
### Component Name
ansible-galaxy collection build
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gotmax/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.10/site-packages/ansible
ansible collection location = /home/gotmax/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.8 (main, Nov 9 2022, 00:00:00) [GCC 12.2.1 20220819 (Red Hat 12.2.1-2)] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_NOCOWS(env: ANSIBLE_NOCOWS) = True
CONFIG_FILE() = /etc/ansible/ansible.cfg
DEFAULT_STDOUT_CALLBACK(env: ANSIBLE_STDOUT_CALLBACK) = community.general.yaml
```
### OS / Environment
Fedora 36
### Steps to Reproduce
1. Download the community.general 6.0.0 sources
2. `ansible-galaxy build .`
3. Add `manifest: {}` to the galaxy.yml
4. Rebuild
5. Compare the outputs
### Expected Results
The aforementioned licensing related files listed above should be included. The default directives list should be changed to something like:
```
include meta/*.yml
include *.txt *.md *.rst *.license COPYING LICENS
recursive-include LICENSES **
recursive-include .reuse **
recursive-include tests **
recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license
recursive-include roles **.yml **.yaml **.json **.j2 **.license
recursive-include playbooks **.yml **.yaml **.json **.license
recursive-include changelogs **.yml **.yaml **.license
recursive-include plugins */**.py **.license
recursive-include plugins/become **.yml **.yaml **.license
recursive-include plugins/cache **.yml **.yaml **.license
recursive-include plugins/callback **.yml **.yaml **.license
recursive-include plugins/cliconf **.yml **.yaml **.license
recursive-include plugins/connection **.yml **.yaml **.license
recursive-include plugins/filter **.yml **.yaml **.license
recursive-include plugins/httpapi **.yml **.yaml **.license
recursive-include plugins/inventory **.yml **.yaml **.license
recursive-include plugins/lookup **.yml **.yaml **.license
recursive-include plugins/netconf **.yml **.yaml **.license
recursive-include plugins/shell **.yml **.yaml **.license
recursive-include plugins/strategy **.yml **.yaml **.license
recursive-include plugins/test **.yml **.yaml **.license
recursive-include plugins/vars **.yml **.yaml **.license
recursive-include plugins/modules **.ps1 **.yml **.yaml **.license
recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license
exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
recursive-exclude tests/output **
global-exclude /.* /__pycache__
```
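To make the exclusion concrete, here is a minimal sketch (not ansible-core code; the path and the directive subset are illustrative) that feeds a few of the current default directives to `distlib`, the same library `ansible-galaxy collection build` uses:
```python
# distlib manifests start empty: only paths matched by an *include*
# directive end up in the artifact, and none of the current defaults
# match LICENSES/, .reuse/, or *.license files.
from distlib.manifest import Manifest

m = Manifest('/path/to/collection')  # illustrative checkout location
for directive in [
    'include *.txt *.md *.rst COPYING LICENSE',
    'recursive-include plugins */**.py',
]:
    m.process_directive(directive)

# LICENSES/GPL-3.0-or-later.txt and .reuse/dep5 never appear here.
print(sorted(m.sorted(wantdirs=False)))
```
Until the defaults change, a collection can opt back in through `manifest.directives` in galaxy.yml (for example `recursive-include LICENSES **` and `include *.license`), although the trailing default `global-exclude /.* /__pycache__` may still drop dot-directories such as `.reuse/`.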
---
``` diff
--- old
+++ new
@@ -1,27 +1,29 @@
include meta/*.yml
-include *.txt *.md *.rst COPYING LICENSE
+include *.txt *.md *.rst *.license COPYING LICENSE
+recursive-include LICENSES **
+recursive-include .reuse **
recursive-include tests **
-recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt
-recursive-include roles **.yml **.yaml **.json **.j2
-recursive-include playbooks **.yml **.yaml **.json
-recursive-include changelogs **.yml **.yaml
-recursive-include plugins */**.py
-recursive-include plugins/become **.yml **.yaml
-recursive-include plugins/cache **.yml **.yaml
-recursive-include plugins/callback **.yml **.yaml
-recursive-include plugins/cliconf **.yml **.yaml
-recursive-include plugins/connection **.yml **.yaml
-recursive-include plugins/filter **.yml **.yaml
-recursive-include plugins/httpapi **.yml **.yaml
-recursive-include plugins/inventory **.yml **.yaml
-recursive-include plugins/lookup **.yml **.yaml
-recursive-include plugins/netconf **.yml **.yaml
-recursive-include plugins/shell **.yml **.yaml
-recursive-include plugins/strategy **.yml **.yaml
-recursive-include plugins/test **.yml **.yaml
-recursive-include plugins/vars **.yml **.yaml
-recursive-include plugins/modules **.ps1 **.yml **.yaml
-recursive-include plugins/module_utils **.ps1 **.psm1 **.cs
+recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt **.license
+recursive-include roles **.yml **.yaml **.json **.j2 **.license
+recursive-include playbooks **.yml **.yaml **.json **.license
+recursive-include changelogs **.yml **.yaml **.license
+recursive-include plugins */**.py **.license
+recursive-include plugins/become **.yml **.yaml **.license
+recursive-include plugins/cache **.yml **.yaml **.license
+recursive-include plugins/callback **.yml **.yaml **.license
+recursive-include plugins/cliconf **.yml **.yaml **.license
+recursive-include plugins/connection **.yml **.yaml **.license
+recursive-include plugins/filter **.yml **.yaml **.license
+recursive-include plugins/httpapi **.yml **.yaml **.license
+recursive-include plugins/inventory **.yml **.yaml **.license
+recursive-include plugins/lookup **.yml **.yaml **.license
+recursive-include plugins/netconf **.yml **.yaml **.license
+recursive-include plugins/shell **.yml **.yaml **.license
+recursive-include plugins/strategy **.yml **.yaml **.license
+recursive-include plugins/test **.yml **.yaml **.license
+recursive-include plugins/vars **.yml **.yaml **.license
+recursive-include plugins/modules **.ps1 **.yml **.yaml **.license
+recursive-include plugins/module_utils **.ps1 **.psm1 **.cs **.license
# manifest.directives from galaxy.yml inserted here
exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
recursive-exclude tests/output **
```
### Actual Results
> diff <(tar tf ../6.0.0_manifest_comp/before/community-general-6.0.0.tar.gz|sort) <(tar tf ../6.0.0_manifest_comp/after/community-general-6.0.0.tar.gz | sort)
``` diff
1,15d0
< .azure-pipelines/
< .azure-pipelines/azure-pipelines.yml
< .azure-pipelines/README.md
< .azure-pipelines/scripts/
< .azure-pipelines/scripts/aggregate-coverage.sh
< .azure-pipelines/scripts/combine-coverage.py
< .azure-pipelines/scripts/process-results.sh
< .azure-pipelines/scripts/publish-codecov.py
< .azure-pipelines/scripts/report-coverage.sh
< .azure-pipelines/scripts/run-tests.sh
< .azure-pipelines/scripts/time-command.py
< .azure-pipelines/templates/
< .azure-pipelines/templates/coverage.yml
< .azure-pipelines/templates/matrix.yml
< .azure-pipelines/templates/test.yml
17d1
< CHANGELOG.rst.license
20d3
< changelogs/changelog.yaml.license
22,24d4
< changelogs/fragments/
< changelogs/fragments/.keep
< changelogs/.gitignore
89,108d68
< .github/
< .github/BOTMETA.yml
< .github/dependabot.yml
< .github/ISSUE_TEMPLATE/
< .github/ISSUE_TEMPLATE/bug_report.yml
< .github/ISSUE_TEMPLATE/config.yml
< .github/ISSUE_TEMPLATE/documentation_report.yml
< .github/ISSUE_TEMPLATE/feature_request.yml
< .github/patchback.yml
< .github/settings.yml
< .github/workflows/
< .github/workflows/codeql-analysis.yml
< .github/workflows/docs-pr.yml
< .github/workflows/reuse.yml
< .gitignore
< LICENSES/
< LICENSES/BSD-2-Clause.txt
< LICENSES/GPL-3.0-or-later.txt
< LICENSES/MIT.txt
< LICENSES/PSF-2.0.txt
922d881
< .pre-commit-config.yaml
924,925d882
< .reuse/
< .reuse/dep5
928d884
< tests/.gitignore
1114,1115d1069
< tests/integration/targets/django_manage/files/base_test/startproj/
< tests/integration/targets/django_manage/files/base_test/startproj/.keep
2512d2465
< tests/integration/targets/terraform/.gitignore
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79368
|
https://github.com/ansible/ansible/pull/79403
|
a954918b6095adf52c663bdcc340c55762189393
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
| 2022-11-12T22:04:28Z |
python
| 2022-11-17T23:13:01Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils.six import raise_from
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
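# Examples of valid values (semantics enforced in verify_file_signatures):
#   "1"    -> one signature must verify; vacuously passes when no
#             signatures were provided at all
#   "all"  -> every provided (non-ignored) signature must verify
#   "+1"   -> as "1", but fails outright when nothing verified ('+' = strict)
#   "+all" -> as "all", plus at least one signature must succeed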
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
# Allow a dict representing this dataclass to be splatted directly.
# Requires attrs to have a default value, so anything with a default
# of None is swapped for its, potentially mutable, default
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
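# This lets the parsed galaxy.yml value be splatted directly, e.g.
# ManifestControl(**{'directives': ['include foo'], 'omit_default_directives': False}),
# while a bare `manifest: {}` ends up as ManifestControl() and falls back to
# directives=[] / omit_default_directives=False via the field defaults.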
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
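# For example, with detached_signatures=['sigA', 'sigB'] (illustrative) and
# required_successful_count='1', the loop breaks after the first successful
# gpg verification; with 'all', every non-ignored signature must verify.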
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise_from(AnsibleError(to_native(lookup_err)), lookup_err)
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
offline=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
offline, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
offline=offline,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
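# For example: '/usr/share/ansible/collections' becomes
# '/usr/share/ansible/collections/ansible_collections', while a path that
# already ends in 'ansible_collections' is returned unchanged.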
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
# Download the collection from a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if not signatures and not collection.signature_sources:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
# Temporarily override the global display class with our own, which adds the calls to a queue for the thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
# The exception is re-raised so we can be sure the thread is finished and no longer using the display
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
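# i.e. every FILES.json starts with a single 'dir' entry for the collection
# root ('.'); the builders below append per-file entries via _make_entry().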
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
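# For example (checksum value illustrative):
#   _make_entry('plugins/modules/foo.py', 'file', chksum='abc123')
#   -> {'name': 'plugins/modules/foo.py', 'ftype': 'file',
#       'chksum_type': 'sha256', 'chksum_sha256': 'abc123', 'format': 1}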
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns, manifest_control):
# type: (bytes, str, str, list[str], dict[str, t.Any]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
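# The contract of the galaxy.yml 'manifest' key, as seen from this check:
# leaving the key out keeps the Sentinel default and selects the legacy
# walk/build_ignore behaviour; `manifest: {}` arrives as None and selects
# the distlib default directives; a populated mapping customises them.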
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control):
# type: (bytes, str, str, dict[str, t.Any]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst COPYING LICENSE',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt',
'recursive-include roles **.yml **.yaml **.json **.j2',
'recursive-include playbooks **.yml **.yaml **.json',
'recursive-include changelogs **.yml **.yaml',
'recursive-include plugins */**.py',
])
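# Note: a distlib manifest starts empty, so only paths matched by an include
# directive make it into the artifact. None of these defaults match a REUSE
# layout (LICENSES/, .reuse/, *.license), so such files are silently dropped
# unless a collection adds matching entries via manifest.directives.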
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
# Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the same as for
# a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file, follow_symlinks=False).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
# ensure symlinks to dirs are not translated to empty dirs
if os.path.isdir(src_file) and not os.path.islink(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
# do not follow symlinks to ensure the original link is used
shutil.copyfile(src_file, dest_file, follow_symlinks=False)
# avoid setting specific permissions on symlinks since chmod does not
# support avoiding following symlinks and will throw an exception if the
# symlink target does not exist
if not os.path.islink(dest_file):
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def find_existing_collections(path, artifacts_manager):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
b_path = to_bytes(path, errors='surrogate_or_strict')
# FIXME: consider using `glob.glob()` to simplify looping
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
# FIXME: consider feeding b_namespace_path to Candidate.from_dir_path to get subdirs automatically
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if not os.path.isdir(b_collection_path):
continue
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
raise_from(AnsibleError(val_err), val_err)
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop on the first one we are able to successfully get
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
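# Illustrative behavior (hypothetical paths, shown as comments to keep the
# module importable): _is_child_path() guards tar extraction against path
# traversal.
#   _is_child_path(b'/dest/ns/coll/files/x.py', b'/dest/ns/coll')  -> True
#   _is_child_path(b'/dest/ns/coll-evil/x.py', b'/dest/ns/coll')   -> False,
#       because the comparison requires a path-separator boundary
#   _is_child_path(b'../../etc/passwd', b'/dest/ns/coll',
#                  link_name=b'/dest/ns/coll/link')                -> False,
#       the relative link target resolves to /dest/etc/passwd before comparison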
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
offline, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
offline=offline,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
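# Usage note (illustrative): on success, resolve() yields a mapping of each
# fully-qualified collection name to the Candidate chosen for installation,
# e.g. {'community.general': <Candidate fqcn='community.general' ver='6.0.0'>}.
# The except branches above translate resolver failures into AnsibleError
# messages that spell out which requirement (direct or transitive) could not
# be satisfied.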
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should use a 'module_blacklist' key, not 'module_rejectlist'.
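For reference, here is the same filter file using the key this ansible-core version actually reads (taken from the traceback above, which looks up 'module_blacklist'):
```yaml
---
filter_version: '1.0'
module_blacklist:
  # Deprecated
  - docker
  # We only allow pip, not easy_install
  - easy_install
```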
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
```console
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
```
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
examples/plugin_filters.yml
|
---
filter_version: '1.0'
module_blacklist:
# List the modules to blacklist here
#- easy_install
#- s3
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import pkgutil
import sys
import warnings
from collections import defaultdict, namedtuple
from traceback import format_exc
from ansible import __version__ as ansible_version
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.compat.importlib import import_module
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments, find_plugin_docfile
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None # type: ignore[misc]
Version = None # type: ignore[misc]
import importlib.util
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
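# Usage sketch (hypothetical calls, kept as comments):
#   get_shell_plugin(shell_type='sh')           # direct lookup by shell type
#   get_shell_plugin(executable='/bin/bash')    # basename 'bash' is matched
#                                               # against each plugin's
#                                               # COMPATIBLE_SHELLS list
# Passing an executable also pins shell.executable for later command building.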
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
self.action_plugin = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
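    # Resolution examples (illustrative values):
    #   redirect_list[-1] == 'ansible.legacy.copy' -> trimmed to 'copy', then
    #       prefixed with the resolved collection, e.g. 'ansible.builtin.copy'
    #   redirect_list[-1] == 'ns.coll.module'      -> already a valid FQCR,
    #       kept as-is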
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason, action_plugin):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
self.action_plugin = action_plugin
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
@property
def type(self):
return AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
def __repr__(self):
return 'PluginLoader(type={0})'.format(self.type)
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). the non-powershell modules can have any
# file extension and thus powershell modules are picked up in that.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known thing to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE. Since those three dicts key
# on the class_name and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
# cache and return the result
self._paths = ret
return ret
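    # Ordering sketch (hypothetical paths): an input order of
    #   ['./library', './library/windows', '~/.ansible/plugins/modules']
    # sorts to
    #   ['./library', '~/.ansible/plugins/modules', './library/windows']
    # because the stable sort only moves '/windows'-suffixed entries to the
    # end, preserving relative order within each group.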
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS and not C.config.has_configuration_definition(type_name, name):
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
# TODO: allow configurable plugins to use sidecar
# if not dstring:
# filename, cn = find_plugin_docfile( name, type_name, self, [os.path.dirname(path)], C.YAML_DOC_EXTENSIONS)
# # TODO: dstring = AnsibleLoader(, file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
action_plugin = None
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# Prevent mystery redirects that would be determined by the collections keyword
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {fq_name}: {redirect}. "
"Redirects must use fully qualified collection names."
)
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
if self.type == 'modules':
action_plugin = routing_metadata.get('action_plugin')
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
found_files = sorted(found_files) # sort to ensure deterministic results, with the shortest match first
if len(found_files) > 1:
display.debug('Found several possible candidates for the plugin but using first: %s' % ','.join(found_files))
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection,
'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
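    # Redirect walk (illustrative, with hypothetical routing metadata):
    # resolving 'old_module' might chase
    #   old_module -> ns.coll.new_module -> ns.coll.final_module
    # one hop per loop iteration; a name that reappears in redirect_list
    # raises AnsiblePluginCircularRedirect instead of looping forever.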
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.removeprefix('ansible.legacy.'),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = ('ansible.builtin.' + name if path_with_context.internal else name)
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
try:
full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
# For all other plugins we want .pyc and .pyo should be valid
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + name if path_with_context.internal else name
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + alias_name if path_with_context.internal else alias_name
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context, ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
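    # Legacy lookup order (summary of the method above, as comments):
    #   1. per-suffix plugin path cache hit for the exact name
    #   2. scan any not-yet-searched configured directories, filling the caches
    #   3. deprecated '_<name>' alias, emitting a deprecation warning unless
    #      the file is just a symlink
    #   4. last-chance builtin redirect via 'ansible.builtin.<name>'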
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
# FIXME: this still has issues if the module was previously imported but not "cached",
# we should bypass this entire codepath for things that are directly importable
warnings.simplefilter("ignore", RuntimeWarning)
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
# mimic import machinery; make the module-being-loaded available in sys.modules during import
# and remove if there's a failure...
sys.modules[full_name] = module
try:
spec.loader.exec_module(module)
except Exception:
del sys.modules[full_name]
raise
return module
def _update_object(self, obj, name, path, redirected_names=None, resolved=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
names = []
if resolved:
names.append(resolved)
if redirected_names:
# reverse list so best name comes first
names.extend(redirected_names[::-1])
if not names:
raise AnsibleError(f"Missing FQCN for plugin source {name}")
setattr(obj, 'ansible_aliases', names)
setattr(obj, 'ansible_name', names[0])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
fq_name = plugin_load_context.resolved_fqcn
if '.' not in fq_name:
fq_name = '.'.join((plugin_load_context.plugin_resolved_collection, fq_name))
name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
found_in_cache = False
self._load_config_defs(name, self._module_cache[path], path)
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, name, path, redirected_names, fq_name)
obj.__init__(instance, *args, **kwargs) # pylint: disable=unnecessary-dunder-call
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class or incomplete plugin, don't load
display.v('Returning not found on "%s" as it has unimplemented abstract methods; %s' % (name, to_native(e)))
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, name, path, redirected_names, fq_name)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type, in configured paths (no collections)
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
legacy_excluding_builtin = set()
for path_with_context in self._get_paths_with_context():
matches = glob.glob(to_native(os.path.join(path_with_context.path, "*.py")))
if not path_with_context.internal:
legacy_excluding_builtin.update(matches)
# we sort within each path, but keep path precedence from config
all_matches.extend(sorted(matches, key=os.path.basename))
loaded_modules = set()
for path in all_matches:
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename in _PLUGIN_FILTERS[self.package]:
display.debug("'%s' skipped due to a defined plugin filter" % basename)
continue
if basename == '__init__' or (basename == 'base' and self.package == 'ansible.plugins.cache'):
# cache has legacy 'base.py' file, which is a wrapper for __init__.py
display.debug("'%s' skipped due to reserved name" % basename)
continue
if dedupe and basename in loaded_modules:
display.debug("'%s' skipped as duplicate" % basename)
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
if self.type in ('filter', 'test'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
try:
module = self._load_module_source(full_name, path)
except Exception as e:
display.warning("Skipping plugin (%s), cannot load: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
else:
module = self._module_cache[path]
self._load_config_defs(basename, module, path)
try:
obj = getattr(module, self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
if path in legacy_excluding_builtin:
fqcn = basename
else:
fqcn = f"ansible.builtin.{basename}"
self._update_object(obj, basename, path, resolved=fqcn)
yield obj
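# A minimal usage sketch (illustrative only, not part of this module): list the
# file paths of all configured lookup plugins without instantiating them:
#
#   from ansible.plugins.loader import lookup_loader
#   for plugin_path in lookup_loader.all(path_only=True):
#       print(plugin_path)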
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
We need to do a few things differently in the base class because of file == plugin
assumptions and dedupe logic.
"""
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
super(Jinja2Loader, self).__init__(class_name, package, config, subdir, aliases=aliases, required_base_class=required_base_class)
self._loaded_j2_file_maps = []
def _clear_caches(self):
super(Jinja2Loader, self)._clear_caches()
self._loaded_j2_file_maps = []
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
# TODO: handle collection plugin find, see 'get_with_context'
# this really just 'finds the plugin file'
plugin = super(Jinja2Loader, self).find_plugin(name, mod_type=mod_type, ignore_deprecated=ignore_deprecated, check_aliases=check_aliases,
collection_list=collection_list)
# if not found, try loading all non collection plugins and see if this in there
if not plugin:
all_plugins = self.all()
plugin = all_plugins.get(name, None)
return plugin
@property
def method_map_name(self):
return get_plugin_class(self.class_name) + 's'
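# e.g. class_name 'FilterModule' yields 'filters' (matching the conventional
# FilterModule.filters() map) and 'TestModule' yields 'tests'.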
def get_contained_plugins(self, collection, plugin_path, name):
plugins = []
full_name = '.'.join(['ansible_collections', collection, 'plugins', self.type, name])
try:
# use 'parent' loader class to find files, but cannot return this as it can contain multiple plugins per file
if plugin_path not in self._module_cache:
self._module_cache[plugin_path] = self._load_module_source(full_name, plugin_path)
module = self._module_cache[plugin_path]
obj = getattr(module, self.class_name)
except Exception as e:
raise KeyError('Failed to load %s for %s: %s' % (plugin_path, collection, to_native(e)))
plugin_impl = obj()
if plugin_impl is None:
raise KeyError('Could not find %s.%s' % (collection, name))
try:
method_map = getattr(plugin_impl, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Ignoring %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_path), e))
return plugins
for func_name, func in plugin_map:
fq_name = '.'.join((collection, func_name))
full = '.'.join((full_name, func_name))
pclass = self._load_jinja2_class()
plugin = pclass(func)
if plugin in plugins:
continue
self._update_object(plugin, full, plugin_path, resolved=fq_name)
plugins.append(plugin)
return plugins
def get_with_context(self, name, *args, **kwargs):
# found_in_cache = True
class_only = kwargs.pop('class_only', False)  # just pop it, don't want to pass it through
collection_list = kwargs.pop('collection_list', None)
context = PluginLoadContext()
# avoid collection path for legacy
name = name.removeprefix('ansible.legacy.')
if '.' not in name:
# Filter/tests must always be FQCN except builtin and legacy
for known_plugin in self.all(*args, **kwargs):
if known_plugin.matches_name([name]):
context.resolved = True
context.plugin_resolved_name = name
context.plugin_resolved_path = known_plugin._original_path
context.plugin_resolved_collection = 'ansible.builtin' if known_plugin.ansible_name.startswith('ansible.builtin.') else ''
context._resolved_fqcn = known_plugin.ansible_name
return get_with_context_result(known_plugin, context)
plugin = None
key, leaf_key = get_fqcr_and_name(name)
seen = set()
# follow the meta!
while True:
if key in seen:
raise AnsibleError('recursive collection redirect found for %r' % name, 0)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self.type)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
try:
ts = _get_collection_metadata(acr.collection)
except ValueError as e:
# no collection
raise KeyError('Invalid plugin FQCN ({0}): {1}'.format(key, to_native(e)))
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self.type, {}).get(leaf_key, {})
# check deprecations
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self.type, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
# check removal
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self.type, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
# check redirects
redirect = routing_entry.get('redirect', None)
if redirect:
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {acr.collection}.{acr.resource}: {redirect}. "
"Redirects must use fully qualified collection names."
)
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self.type, acr.collection, acr.resource, next_key))
key = next_key
else:
break
try:
pkg = import_module(acr.n_python_package_name)
except ImportError as e:
raise KeyError(to_native(e))
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
try:
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
# use 'parent' loader class to find files, but cannot return this as it can contain
# multiple plugins per file
plugin_impl = super(Jinja2Loader, self).get_with_context(module_name, *args, **kwargs)
except Exception as e:
raise KeyError(to_native(e))
try:
method_map = getattr(plugin_impl.object, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_impl.object._original_path), e))
continue
for func_name, func in plugin_map:
fq_name = '.'.join((parent_prefix, func_name))
src_name = f"ansible_collections.{acr.collection}.plugins.{self.type}.{acr.subdirs}.{func_name}"
# TODO: load anyway into CACHE so we only match each at the end of the loop
# the files themselves should already be cached by the base class caching of (python) modules
if key in (func_name, fq_name):
pclass = self._load_jinja2_class()
plugin = pclass(func)
if plugin:
context = plugin_impl.plugin_load_context
self._update_object(plugin, src_name, plugin_impl.object._original_path, resolved=fq_name)
break  # go to the next file as it can override if dupe (don't break both loops)
except AnsiblePluginRemovedError as apre:
raise AnsibleError(to_native(apre), 0, orig_exc=apre)
except (AnsibleError, KeyError):
raise
except Exception as ex:
display.warning('An unexpected error occurred during Jinja2 plugin loading: {0}'.format(to_native(ex)))
display.vvv('Unexpected error during Jinja2 plugin loading: {0}'.format(format_exc()))
raise AnsibleError(to_native(ex), 0, orig_exc=ex)
return get_with_context_result(plugin, context)
def all(self, *args, **kwargs):
# inputs: we ignore 'dedupe' since we always dedupe; it is used in the base class to find the files for this loader
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False) # basically ignored for test/filters since they are functions
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
found = set()
# get plugins from files in configured paths (multiple in each)
for p_map in self._j2_all_file_maps(*args, **kwargs):
# p_map is really the object from the file whose class holds multiple plugins
plugins_list = getattr(p_map, self.method_map_name)
try:
plugins = plugins_list()
except Exception as e:
display.vvvv("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(p_map._original_path), e))
continue
for plugin_name in plugins.keys():
if plugin_name in _PLUGIN_FILTERS[self.package]:
display.debug("%s skipped due to a defined plugin filter" % plugin_name)
continue
if plugin_name in found:
display.debug("%s skipped as duplicate" % plugin_name)
continue
if path_only:
result = p_map._original_path
else:
# the loader class is for the file with multiple plugins, but each plugin now has its own class
pclass = self._load_jinja2_class()
result = pclass(plugins[plugin_name])  # if the plugin is bad, let the exception rise
found.add(plugin_name)
fqcn = plugin_name
collection = '.'.join(p_map.ansible_name.split('.')[:2]) if p_map.ansible_name.count('.') >= 2 else ''
if not plugin_name.startswith(collection):
fqcn = f"{collection}.{plugin_name}"
self._update_object(result, plugin_name, p_map._original_path, resolved=fqcn)
yield result
def _load_jinja2_class(self):
""" override the normal method of plugin classname as these are used in the generic funciton
to access the 'multimap' of filter/tests to function, this is a 'singular' plugin for
each entry.
"""
class_name = 'AnsibleJinja2%s' % get_plugin_class(self.class_name).capitalize()
module = __import__(self.package, fromlist=[class_name])
return getattr(module, class_name)
def _j2_all_file_maps(self, *args, **kwargs):
"""
* Unlike other plugin types, file != plugin: a file can contain multiple plugins (of the same type).
This is why we do not deduplicate ansible file names at this point, we mostly care about
the names of the actual jinja2 plugins which are inside of our files.
* This method will NOT fetch collection plugin files, only those that would be expected under 'ansible.builtin/legacy'.
"""
# populate cache if needed
if not self._loaded_j2_file_maps:
# We don't deduplicate ansible file names.
# Instead, calling code deduplicates jinja2 plugin names when loading each file.
kwargs['_dedupe'] = False
# To match correct precedence, call base class' all() to get a list of files,
self._loaded_j2_file_maps = list(super(Jinja2Loader, self).all(*args, **kwargs))
return self._loaded_j2_file_maps
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
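# For example: get_fqcr_and_name('fileglob') returns
# ('ansible.builtin.fileglob', 'fileglob'), while
# get_fqcr_and_name('community.general.json_query') returns the input unchanged
# with name 'json_query'.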
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
# Special-case the stat module, as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
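# For reference, a minimal filter file accepted by the parser above (YAML; the
# version must be '1.0' and 'stat' must not be listed):
#
#   ---
#   filter_version: '1.0'
#   module_blacklist:
#     - easy_install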
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
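# For example (assuming the 'packaging' module is available): the requirement
# string '>=2.9.10' with ansible_version '2.14.0.dev0' passes, because only the
# base version '2.14.0' is tested against the specifier set.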
def _configure_collection_loader():
if AnsibleCollectionConfig.collection_finder:
# this must be a Python warning so that it can be filtered out by the import sanity test
warnings.warn('AnsibleCollectionFinder has already been configured')
return
finder = _AnsibleCollectionFinder(C.COLLECTIONS_PATHS, C.COLLECTIONS_SCAN_SYS_PATH)
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
# TODO: All of the following is initialization code. It should be moved inside an initialization
# function which is called at some point early in the ansible and ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
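# A short usage sketch of the loaders above (illustrative only; instantiation
# arguments vary by plugin type):
#
#   shell = shell_loader.get('sh')                           # instance
#   become_cls = become_loader.get('sudo', class_only=True)  # class, not instantiated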
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output I got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should have a 'module_blacklist' list, and not 'module_rejectlist'
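For reference, here is the same example rewritten with the key the loader actually reads (`module_blacklist`):
```yaml
---
filter_version: '1.0'
module_blacklist:
  # Deprecated
  - docker
  # We only allow pip, not easy_install
  - easy_install
```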
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/filter_lookup.yml
|
---
filter_version: 1.0
module_blacklist:
# Specify the name of a lookup plugin here. This should have no effect as
# this is only for filtering modules
- list
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output I got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should have a 'module_blacklist' list, and not 'module_rejectlist'
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/filter_modules.yml
|
---
filter_version: 1.0
module_blacklist:
# A pure action plugin
- pause
# A hybrid action plugin with module
- copy
# A pure module
- tempfile
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output I got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should have a 'module_blacklist' list, and not 'module_rejectlist'
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/filter_ping.yml
|
---
filter_version: 1.0
module_blacklist:
# Ping is special
- ping
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output I got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should have a 'module_blacklist' list, and not 'module_rejectlist'
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/filter_stat.yml
|
---
filter_version: 1.0
module_blacklist:
# Stat is special
- stat
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml.
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output I got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should have a 'module_blacklist' list, and not 'module_rejectlist'
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/no_blacklist_module.ini
|
[defaults]
retry_files_enabled = False
plugin_filters_cfg = ./no_blacklist_module.yml
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should use a 'module_blacklist' list, not 'module_rejectlist'.
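For reference, a sketch of the file using the key this release actually accepts (same modules as above):
```yaml
---
filter_version: '1.0'
module_blacklist:
  # Deprecated
  - docker
  # We only allow pip, not easy_install
  - easy_install
```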
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/no_rejectlist_module.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,390 |
Wrong category in plugin filter configuration section
|
### Summary
Hello!
I was reading the Ansible documentation (latest and 2.8) to understand how to create a plugin_filters.yml
In the documentation, I found the `module_rejectlist` keyword for listing unwanted modules, but I got some errors:
plugin_filters.yml:
```yaml
---
filter_version: '1.0'
module_rejectlist:
# Deprecated
- docker
# We only allow pip, not easy_install
- easy_install
```
Output got:
```bash
$> ansible-playbook connection.yaml
Traceback (most recent call last):
File "/home/master/.local/bin//ansible-playbook", line 5, in <module>
from ansible.cli.playbook import main
File "/home/master/.local/lib/python3.8/site-packages/ansible/cli/__init__.py", line 52, in <module>
from ansible.inventory.manager import InventoryManager
File "/home/master/.local/lib/python3.8/site-packages/ansible/inventory/manager.py", line 38, in <module>
from ansible.plugins.loader import inventory_loader
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1187, in <module>
_PLUGIN_FILTERS = _load_plugin_filter()
File "/home/master/.local/lib/python3.8/site-packages/ansible/plugins/loader.py", line 1112, in _load_plugin_filter
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
KeyError: 'module_blacklist'
```
In fact, after some research, I understood that I should use a 'module_blacklist' list, not 'module_rejectlist'.
Maybe you could change this in the documentation :)
Have a nice day
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/user_guide/plugin_filtering_config.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = /home/master/Digora/Automation/ansible.cfg
configured module search path = ['/usr/share/my_modules']
ansible python module location = /home/master/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/master/.ansible/collections:/usr/share/ansible/collections
executable location = /home/master/.local/bin//ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
COLOR_CHANGED(/home/master/Digora/Automation/ansible.cfg) = yellow
COLOR_DEBUG(/home/master/Digora/Automation/ansible.cfg) = dark gray
COLOR_DEPRECATE(/home/master/Digora/Automation/ansible.cfg) = purple
COLOR_DIFF_ADD(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_DIFF_LINES(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_DIFF_REMOVE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_ERROR(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_HIGHLIGHT(/home/master/Digora/Automation/ansible.cfg) = white
COLOR_OK(/home/master/Digora/Automation/ansible.cfg) = green
COLOR_SKIP(/home/master/Digora/Automation/ansible.cfg) = cyan
COLOR_UNREACHABLE(/home/master/Digora/Automation/ansible.cfg) = red
COLOR_VERBOSE(/home/master/Digora/Automation/ansible.cfg) = blue
COLOR_WARN(/home/master/Digora/Automation/ansible.cfg) = bright purple
DEFAULT_ASK_PASS(/home/master/Digora/Automation/ansible.cfg) = True
DEFAULT_EXECUTABLE(/home/master/Digora/Automation/ansible.cfg) = /bin/sh
DEFAULT_FORKS(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_GATHERING(/home/master/Digora/Automation/ansible.cfg) = explicit
DEFAULT_HOST_LIST(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/hosts']
DEFAULT_LOCAL_TMP(/home/master/Digora/Automation/ansible.cfg) = /home/master/.ansible/tmp/ansible-local-1608385bihgyh
DEFAULT_LOG_PATH(/home/master/Digora/Automation/ansible.cfg) = /var/log/ansible.log
DEFAULT_MANAGED_STR(/home/master/Digora/Automation/ansible.cfg) = /!\ Generate by Ansible. Do not edit this file manually. All change will be lost /!\
DEFAULT_MODULE_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_modules']
DEFAULT_MODULE_UTILS_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/usr/share/my_module_utils']
DEFAULT_NO_LOG(/home/master/Digora/Automation/ansible.cfg) = False
DEFAULT_POLL_INTERVAL(/home/master/Digora/Automation/ansible.cfg) = 15
DEFAULT_REMOTE_PORT(/home/master/Digora/Automation/ansible.cfg) = 22
DEFAULT_REMOTE_USER(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
DEFAULT_ROLES_PATH(/home/master/Digora/Automation/ansible.cfg) = ['/etc/ansible/roles,./roles']
DEFAULT_STRATEGY(/home/master/Digora/Automation/ansible.cfg) = free
DEFAULT_TIMEOUT(/home/master/Digora/Automation/ansible.cfg) = 10
DEFAULT_TRANSPORT(/home/master/Digora/Automation/ansible.cfg) = smart
DEPRECATION_WARNINGS(/home/master/Digora/Automation/ansible.cfg) = True
DIFF_ALWAYS(/home/master/Digora/Automation/ansible.cfg) = False
DISPLAY_SKIPPED_HOSTS(/home/master/Digora/Automation/ansible.cfg) = True
PARAMIKO_LOOK_FOR_KEYS(/home/master/Digora/Automation/ansible.cfg) = False
PLUGIN_FILTERS_CFG(/home/master/Digora/Automation/ansible.cfg) = /etc/ansible/plugin_filters.yml
CALLBACK:
========
default:
_______
display_skipped_hosts(/home/master/Digora/Automation/ansible.cfg) = True
CONNECTION:
==========
paramiko_ssh:
____________
look_for_keys(/home/master/Digora/Automation/ansible.cfg) = False
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
ssh:
___
port(/home/master/Digora/Automation/ansible.cfg) = 22
reconnection_retries(/home/master/Digora/Automation/ansible.cfg) = 5
remote_user(/home/master/Digora/Automation/ansible.cfg) = digora-ansible
timeout(/home/master/Digora/Automation/ansible.cfg) = 10
SHELL:
=====
sh:
__
remote_tmp(/home/master/Digora/Automation/ansible.cfg) = ~/.ansible/tmp
```
### OS / Environment
$> cat /etc/os-release
NAME="Ubuntu"
VERSION="20.04.5 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.5 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal
### Additional Information
No Additional Information
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79390
|
https://github.com/ansible/ansible/pull/79391
|
942bcf6e7a911430694e08dd604d62576ca7d6f2
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
| 2022-11-16T13:33:43Z |
python
| 2022-11-18T19:26:35Z |
test/integration/targets/plugin_filtering/runme.sh
|
#!/usr/bin/env bash
set -ux
#
# Check that with no filters set, all of these modules run as expected
#
ANSIBLE_CONFIG=no_filters.ini ansible-playbook copy.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run copy with no filters applied"
exit 1
fi
ANSIBLE_CONFIG=no_filters.ini ansible-playbook pause.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run pause with no filters applied"
exit 1
fi
ANSIBLE_CONFIG=no_filters.ini ansible-playbook tempfile.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run tempfile with no filters applied"
exit 1
fi
#
# Check that if no modules are blacklisted then Ansible does not throw a traceback
#
ANSIBLE_CONFIG=no_blacklist_module.ini ansible-playbook tempfile.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run tempfile with no modules blacklisted"
exit 1
fi
#
# Check that with these modules filtered out, all of these modules fail to be found
#
ANSIBLE_CONFIG=filter_modules.ini ansible-playbook copy.yml -i ../../inventory -v "$@"
if test $? = 0 ; then
echo "### Failed to prevent copy from running"
exit 1
else
echo "### Copy was prevented from running as expected"
fi
ANSIBLE_CONFIG=filter_modules.ini ansible-playbook pause.yml -i ../../inventory -v "$@"
if test $? = 0 ; then
echo "### Failed to prevent pause from running"
exit 1
else
echo "### pause was prevented from running as expected"
fi
ANSIBLE_CONFIG=filter_modules.ini ansible-playbook tempfile.yml -i ../../inventory -v "$@"
if test $? = 0 ; then
echo "### Failed to prevent tempfile from running"
exit 1
else
echo "### tempfile was prevented from running as expected"
fi
#
# ping is a special module as we test for its existence. Check it specially
#
# Check that ping runs with no filter
ANSIBLE_CONFIG=no_filters.ini ansible-playbook ping.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run ping with no filters applied"
exit 1
fi
# Check that other modules run with ping filtered
ANSIBLE_CONFIG=filter_ping.ini ansible-playbook copy.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run copy when a filter was applied to ping"
exit 1
fi
# Check that ping fails to run when it is filtered
ANSIBLE_CONFIG=filter_ping.ini ansible-playbook ping.yml -i ../../inventory -v "$@"
if test $? = 0 ; then
echo "### Failed to prevent ping from running"
exit 1
else
echo "### Ping was prevented from running as expected"
fi
#
# Check that specifying a lookup plugin in the filter has no effect
#
ANSIBLE_CONFIG=filter_lookup.ini ansible-playbook lookup.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to use a lookup plugin when it is incorrectly specified in the *module* blacklist"
exit 1
fi
#
# stat is a special module as we use it to run nearly every other module. Check it specially
#
# Check that stat runs with no filter
ANSIBLE_CONFIG=no_filters.ini ansible-playbook stat.yml -i ../../inventory -vvv "$@"
if test $? != 0 ; then
echo "### Failed to run stat with no filters applied"
exit 1
fi
# Check that running another module when stat is filtered gives us our custom error message
ANSIBLE_CONFIG=filter_stat.ini
export ANSIBLE_CONFIG
CAPTURE=$(ansible-playbook copy.yml -i ../../inventory -vvv "$@" 2>&1)
if test $? = 0 ; then
echo "### Copy ran even though stat is in the module blacklist"
exit 1
else
echo "$CAPTURE" | grep 'The stat module was specified in the module blacklist file,.*, but Ansible will not function without the stat module. Please remove stat from the blacklist.'
if test $? != 0 ; then
echo "### Stat did not give us our custom error message"
exit 1
fi
echo "### Filtering stat failed with our custom error message as expected"
fi
unset ANSIBLE_CONFIG
# Check that running stat when stat is filtered gives our custom error message
ANSIBLE_CONFIG=filter_stat.ini
export ANSIBLE_CONFIG
CAPTURE=$(ansible-playbook stat.yml -i ../../inventory -vvv "$@" 2>&1)
if test $? = 0 ; then
echo "### Stat ran even though it is in the module blacklist"
exit 1
else
echo "$CAPTURE" | grep 'The stat module was specified in the module blacklist file,.*, but Ansible will not function without the stat module. Please remove stat from the blacklist.'
if test $? != 0 ; then
echo "### Stat did not give us our custom error message"
exit 1
fi
echo "### Filtering stat failed with our custom error message as expected"
fi
unset ANSIBLE_CONFIG
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,432 |
client certificate authentication towards Windows 11 with TLS 1.3
|
### Summary
As already explained in issue [#77768](https://github.com/ansible/ansible/issues/77768), a connection to a Windows 11 machine through WinRM certificate auth does not work and throws an error.
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.10]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/Qf6Kw9Jt5nbI/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/Qf6Kw9Jt5nbI/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 2.10.1
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
Ansible controller: Ubuntu 20.04
Target machine: Windows 11
```powershell
[System.Environment]::OSVersion.Version
Major Minor Build Revision
----- ----- ----- --------
10 0 22621 0
```
Client machine for PSSession: Windows 11
```powershell
[System.Environment]::OSVersion.Version
Major Minor Build Revision
----- ----- ----- --------
10 0 22000 0
```
### Steps to Reproduce
## Ansible controller
Tested with server cert validation:
```yml
ansible_connection: winrm
ansible_winrm_server_cert_validation: validate
ansible_winrm_cert_pem: ../../files/mgmtcerts/{{ lookup('env','UPN_USER') }}.pem
ansible_winrm_cert_key_pem: ../../files/mgmtcerts/{{ lookup('env','UPN_USER') }}-private.pem
ansible_winrm_ca_trust_path: ../../files/mgmtcerts/server.pem
ansible_winrm_transport: certificate
```
Tested without server cert validation:
```yml
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
ansible_winrm_cert_pem: ../../files/mgmtcerts/{{ lookup('env','UPN_USER') }}.pem
ansible_winrm_cert_key_pem: ../../files/mgmtcerts/{{ lookup('env','UPN_USER') }}-private.pem
# ansible_winrm_ca_trust_path: ../../files/mgmtcerts/server.pem
ansible_winrm_transport: certificate
```
Ini:
```yml
[test]
dev-gc01 ansible_host=winvm0
```
## Windows
[docs.ansible.com](https://docs.ansible.com/ansible/latest/user_guide/windows_winrm.html#certificate)
### Test Powershell connection on Windows client
Tested Powershell remote session from a windows client to the same target machine and it worked:
1. Import Server certificate as trusted CA
2. Import User certificate in personal store
3. Add hostname of target Windows to hosts file with ip address.
4. Execute Powershell command to connect to Windows OS
```powershell
Enter-PSSession -ComputerName winvm0 -UseSSL -CertificateThumbprint <usercert thumbprint>
[winvm0]: PS C:\Users\test\Documents> hostname
winvm0
```
### Expected Results
```bash
TASK [Gathering Facts] *********************************************************
ok: [dev-gc01]
```
### Actual Results
```console
fatal: [dev-gc01]: UNREACHABLE! => {"changed": false, "msg": "certificate: HTTPSConnectionPool(host='winvm0', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(\"bad handshake: Error([('SSL routines', 'tls_process_server_certificate', 'certificate verify failed')])\")))", "unreachable": true}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79432
|
https://github.com/ansible/ansible/pull/79434
|
1bda6750f5f4fb8b01de21d1949b02d7547ff838
|
493ef4a559362d874b22d362fe3423a4410c6f70
| 2022-11-21T16:26:20Z |
python
| 2022-11-21T20:46:48Z |
docs/docsite/rst/os_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents::
:local:
:depth: 2
What is WinRM?
----------------
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: On distributions with multiple Python versions, use pip2 or pip2.x,
where x matches the python minor version Ansible is running under.
.. Warning::
Using the ``winrm`` or ``psrp`` connection plugins in Ansible on MacOS in
the latest releases typically fails. This is a known problem that occurs
deep within the Python stack and cannot be changed by Ansible. The only
workaround today is to set the environment variable ``no_proxy=*`` and
avoid using Kerberos auth.
.. _winrm_auth:
WinRM authentication options
-----------------------------
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
.. _winrm_basic:
Basic
^^^^^^
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64-encoded, and if a secure channel is not in use (for example, HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
.. _winrm_certificate:
Certificate
^^^^^^^^^^^^
Certificate authentication uses certificates as keys similar to SSH key
pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used as the urllib3 library that
is used by Ansible for WinRM does not support this functionality.
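If a generated private key is encrypted, one way to produce an unencrypted copy for Ansible is with OpenSSL. This is a minimal sketch assuming an RSA key; the file names are placeholders:
.. code-block:: shell
# Prompts for the passphrase and writes an unencrypted copy of the key
openssl rsa -in cert_key_encrypted.pem -out cert_key.pem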
.. _winrm_certificate_generate:
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be
the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
# Set the name of the local user that will have the key mapped to
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
# Set the name of the local user that will have the key mapped
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
.. _winrm_certificate_import:
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. _winrm_certificate_mapping:
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account:
.. code-block:: powershell
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint which in the case of a self generated cert
# is the public key thumbprint, additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
.. _winrm_ntlm:
NTLM
^^^^^
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption
protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
.. _winrm_kerberos:
Kerberos
^^^^^^^^^
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set:
.. code-block:: yaml
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
.. _winrm_kerberos_install:
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
There are some system dependencies that must be installed before using Kerberos. The script below lists the dependencies by distribution:
.. code-block:: shell
# Through Yum (RHEL/Centos/Fedora for the older version)
yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
# Through DNF (RHEL/Centos/Fedora for the newer version)
dnf -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation
# Through Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Through Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Through Pkg (FreeBSD)
sudo pkg install security/krb5
# Through OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Through Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
.. note::
While Ansible has supported Kerberos auth through ``pywinrm`` for some
time, optional features or more secure options may only be available in
newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
recommended you upgrade each version to the latest available to resolve
any warnings or errors. This can be done through tools like ``pip`` or a
system package manager like ``dnf``, ``yum``, ``apt`` but the package
names and versions available may differ between tools.
.. _winrm_kerberos_config:
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access to:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
.. _winrm_kerberos_ticket_auto:
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
through the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to a MIT krbv5
``kinit``-compatible binary.
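As a minimal sketch, both options could be set per host through the inventory (the binary path here is a placeholder):
.. code-block:: yaml+jinja
ansible_winrm_kinit_mode: manual
ansible_winrm_kinit_cmd: /usr/local/bin/kinit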
.. _winrm_kerberos_ticket_manual:
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, the ``kinit`` binary is used. To
obtain a new ticket the following command is used:
.. code-block:: shell
kinit [email protected]
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
.. _winrm_kerberos_troubleshoot:
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* If you connect using an IP address you will get the error message ``Server not found in Kerberos database``.
* To determine if you are connecting using an IP address or an FQDN run your playbook (or call the ``win_ping`` module) using the ``-vvv`` flag.
* The forward and reverse DNS lookups are working properly in the domain. To
test this, ping the windows host by name and then use the ip address returned
with ``nslookup``. The same name should be returned when using ``nslookup``
on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run:
.. code-block:: console
kinit -C [email protected]
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of Python Kerberos library package.
.. _winrm_credssp:
CredSSP
^^^^^^^
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below:
.. code-block:: yaml
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
.. _winrm_credssp_install:
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
.. _winrm_credssp_tls:
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
over TLS 1.0. This is the only option when connecting to Windows Server 2008, which
has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
.. _winrm_credssp_cert:
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP:
.. code-block:: powershell
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
.. _winrm_nonadmin:
Non-Administrator Accounts
---------------------------
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so the utility is usually limited.
.. _winrm_encrypt:
WinRM Encryption
-----------------
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
If in a domain environment, ADCS can create a certificate for the host that
is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. The message-level encryption is not used when running over HTTPS because the
encryption uses the more secure TLS protocol instead. If both transport and
message encryption are required, set ``ansible_winrm_message_encryption=always``
in the host vars.
.. Note:: Message encryption over HTTP requires pywinrm>=0.3.0.
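For example, a host vars sketch that forces message encryption regardless of the transport scheme (the credentials are placeholders):
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
ansible_winrm_message_encryption: always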
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed and manipulated, and the remote session can be completely
taken over by anyone on the same network. To disable the encryption
requirement:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
.. _winrm_inventory:
Inventory Options
------------------
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can be set on the ``host_vars``/
``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line through --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``[email protected]`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections:
* ``ansible_port``: The port WinRM will run over, HTTPS is ``5986`` which is
the default while HTTP is ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
mitigate against man in the middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
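As an illustration only (the host name and values below are placeholders), several of these variables can be combined in an inventory:
.. code-block:: ini
[windows-server]
winhost.example.com
[windows-server:vars]
ansible_connection=winrm
ansible_user=Administrator
ansible_password=SecretPasswordGoesHere
ansible_port=5986
ansible_winrm_server_cert_validation=ignore
ansible_winrm_read_timeout_sec=60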
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
.. _winrm_ipv6:
IPv6 Addresses
---------------
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass it to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
.. _winrm_https:
HTTPS Certificate Validation
-----------------------------
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up an HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single or multiple issuer certificates and each
entry is contained on a new line. To then use the custom CA chain as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead, which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
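For example (the path below is a placeholder), the custom chain can be selected per host:
.. code-block:: yaml+jinja
ansible_winrm_server_cert_validation: validate
ansible_winrm_ca_trust_path: /etc/pki/ansible/winrm-ca-chain.pem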
.. Note:: Each HTTP call is done by the Python requests library which does not
use the systems built-in certificate store as a trust authority.
Certificate validation will fail if the server's certificate issuer is
only added to the system's truststore.
.. _winrm_tls12:
TLS 1.2 Support
----------------
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to:
.. code-block:: ansible-output
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
Commonly this is when the Windows host has not been configured to support
TLS v1.2 but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled
manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller:
.. code-block:: shell
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated:
.. code-block:: console
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The below Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
.. _winrm_limitations:
WinRM limitations
------------------
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
because they access forbidden Windows APIs, like WUA, over WinRM.
* Commands under WinRM are done under a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources; see the sketch after this list
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
locally. Unlike using an authentication transport like ``credssp``, this will
also remove the non-interactive restriction and API restrictions like WUA and
DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
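As a minimal sketch of the first option, the CredSSP transport can be set
through inventory variables. The group file path below is an illustrative
assumption, not a required layout:

.. code-block:: yaml

   # group_vars/windows.yml (illustrative path): switch the WinRM
   # transport to CredSSP to work around the double hop issue.
   # Note: CredSSP support typically requires the requests-credssp
   # package on the controller.
   ansible_connection: winrm
   ansible_winrm_transport: credssp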
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,397 |
Add more complex examples for to_datetime filter docs
|
### Summary
I am attempting to subtract two dates that are in ISO 8601 Nano format (ex: 2022-11-15T03:23:13.686956868Z) and the [documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#handling-dates-and-times) doesn't have any examples for that or the ISO 8601 Micro format (ex: 2021-12-15T16:06:24.400087Z), which are the formats I most often encounter in Ansible module outputs.
Can the section be expanded, assuming the to_datetime filter can handle these ISO formats?
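For illustration, this is the kind of recipe I was hoping the docs would show. The variable names and format string below are my own guesses, not documented behavior, and Python's `%f` only parses microseconds, so the nanosecond form would presumably need truncating first:
```jinja
{# Illustrative only: seconds between two ISO 8601 micro-format timestamps #}
{{ ((end_ts | to_datetime("%Y-%m-%dT%H:%M:%S.%fZ")) -
    (start_ts | to_datetime("%Y-%m-%dT%H:%M:%S.%fZ"))).total_seconds() }}
```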
### Issue Type
Documentation Report
### Component Name
to_datetime
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
RHEL 8
### Additional Information
Flesh out documentation to cover how to manipulate dates for common date formats
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79397
|
https://github.com/ansible/ansible/pull/79417
|
b148fd8dd74c8599f809f71117a86577ccfb0638
|
505b29b2a981eabb2dd84bc66d37704bab91c3f9
| 2022-11-17T00:36:04Z |
python
| 2022-11-23T17:27:09Z |
changelogs/fragments/strftime_docs.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,397 |
Add more complex examples for to_datetime filter docs
|
### Summary
I am attempting to subtract two dates that are in ISO 8601 Nano format (ex: 2022-11-15T03:23:13.686956868Z) and the [documentation](https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#handling-dates-and-times) doesn't have any examples for that or the ISO 8601 Micro format (ex: 2021-12-15T16:06:24.400087Z), which are the formats I most often encounter in Ansible module outputs.
Can the section be expanded, assuming the to_datetime filter can handle these ISO formats?
### Issue Type
Documentation Report
### Component Name
to_datetime
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.6]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.7 (default, Sep 13 2021, 08:18:39) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
RHEL 8
### Additional Information
Flesh out documentation to cover how to manipulate dates for common date formats
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79397
|
https://github.com/ansible/ansible/pull/79417
|
b148fd8dd74c8599f809f71117a86577ccfb0638
|
505b29b2a981eabb2dd84bc66d37704bab91c3f9
| 2022-11-17T00:36:04Z |
python
| 2022-11-23T17:27:09Z |
lib/ansible/plugins/filter/strftime.yml
|
DOCUMENTATION:
name: strftime
version_added: "2.4"
short_description: date formatting
description:
- Using Python's C(strftime) function, take a date formatting string and a date/time to create a formatted date.
notes:
- This is a passthrough to Python's C(strftime).
positional: _input, second, utc
options:
_input:
description:
- A formatting string following C(strftime) conventions.
- See L(the Python documentation, https://docs.python.org/3/library/datetime.html#strftime-strptime-behavior) for a reference.
type: str
required: true
second:
description: Datetime in seconds from C(epoch) to format, if not supplied C(gmtime)/C(localtime) will be used.
type: int
utc:
description: Whether time supplied is in UTC.
type: bool
default: false
EXAMPLES: |
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
RETURN:
_value:
description: A formatted date/time string.
type: str
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,928 |
Failed to get Debian minor version
|
### Summary
This is a follow up to issue #74481, which was closed, but the fix doesn't address the problem described in this issue.
Here is a (slightly modified) copy of my comment on the above issue:
PR #74721 doesn't solve the problem reported in issue #74481.
This PR adds a new `ansible_distribution_minor_version` fact.
But issue #74481 is about the `ansible_distribution_version` fact sometimes not including the minor version.
Here is a list of `ansible_distribution_version` facts gathered for my Debian hosts:
6.0.8
7.6
7.7
7.11
8
8.2
8.10
8.11
9.6
9.12
9.13
10
11
You see that for Debian 8 `ansible_distribution_version` is reported sometimes with and sometimes without the minor version.
From Debian 10 onwards, the fact doesn't include the minor version.
For CentOS, `ansible_distribution_version` always includes the minor version:
7.4
7.5
7.6
7.8
7.9
8.3
8.4
8.5
To make `ansible_distribution_version` consistent between distributions, the minor version should be added to this fact for Debian hosts.
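For context, the vendored `distro` library already exposes a `best=True` switch that prefers the most precise source it can find; a minimal sketch, where the printed values assume a hypothetical host whose sources disagree in precision:
```python
# Minimal sketch; values assume a hypothetical host where one source
# reports "8" and a more precise one reports "8.2".
from ansible.module_utils import distro

print(distro.version())           # e.g. "8"   - first non-empty source in priority order
print(distro.version(best=True))  # e.g. "8.2" - most precise source across all examined
```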
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/setup.py
### Ansible Version
```console
ansible [core 2.12.6]
```
### Configuration
```console
NA
```
### OS / Environment
Debian
### Steps to Reproduce
NA
### Expected Results
NA
### Actual Results
```console
NA
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77928
|
https://github.com/ansible/ansible/pull/79227
|
505b29b2a981eabb2dd84bc66d37704bab91c3f9
|
f79a54ae22b59d4c9bab0fb71d95c63b2e4b834b
| 2022-05-30T06:30:58Z |
python
| 2022-11-23T19:44:15Z |
changelogs/fragments/79227-update-vendored-distro.yaml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,928 |
Failed to get Debian minor version
|
### Summary
This is a follow up to issue #74481, which was closed, but the fix doesn't address the problem described in this issue.
Here is a (slightly modified) copy of my comment on the above issue:
PR #74721 doesn't solve the problem reported in issue #74481.
This PR adds a new `ansible_distribution_minor_version` fact.
But issue #74481 is about the `ansible_distribution_version` fact sometimes not including the minor version.
Here is a list of `ansible_distribution_version` facts gathered for my Debian hosts:
6.0.8
7.6
7.7
7.11
8
8.2
8.10
8.11
9.6
9.12
9.13
10
11
You see that for Debian 8 `ansible_distribution_version` is reported sometimes with and sometimes without the minor version.
From Debian 10 onwards, the fact doesn't include the minor version.
For CentOS, `ansible_distribution_version` always includes the minor version:
7.4
7.5
7.6
7.8
7.9
8.3
8.4
8.5
To make `ansible_distribution_version` consistent between distributions, the minor version should be added to this fact for Debian hosts.
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/setup.py
### Ansible Version
```console
ansible [core 2.12.6]
```
### Configuration
```console
NA
```
### OS / Environment
Debian
### Steps to Reproduce
NA
### Expected Results
NA
### Actual Results
```console
NA
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77928
|
https://github.com/ansible/ansible/pull/79227
|
505b29b2a981eabb2dd84bc66d37704bab91c3f9
|
f79a54ae22b59d4c9bab0fb71d95c63b2e4b834b
| 2022-05-30T06:30:58Z |
python
| 2022-11-23T19:44:15Z |
lib/ansible/module_utils/distro/_distro.py
|
# Copyright 2015,2016,2017 Nir Cohen
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# A local copy of the license can be found in licenses/Apache-License.txt
#
# Modifications to this code have been made by Ansible Project
"""
The ``distro`` package (``distro`` stands for Linux Distribution) provides
information about the Linux distribution it runs on, such as a reliable
machine-readable distro ID, or version information.
It is the recommended replacement for Python's original
:py:func:`platform.linux_distribution` function, but it provides much more
functionality. An alternative implementation became necessary because Python
3.5 deprecated this function, and Python 3.8 removed it altogether. Its
predecessor function :py:func:`platform.dist` was already deprecated since
Python 2.6 and removed in Python 3.8. Still, there are many cases in which
access to OS distribution information is needed. See `Python issue 1322
<https://bugs.python.org/issue1322>`_ for more information.
"""
import logging
import os
import re
import shlex
import subprocess
import sys
import warnings
__version__ = "1.6.0"
# Use `if False` to avoid an ImportError on Python 2. After dropping Python 2
# support, can use typing.TYPE_CHECKING instead. See:
# https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING
if False: # pragma: nocover
from typing import (
Any,
Callable,
Dict,
Iterable,
Optional,
Sequence,
TextIO,
Tuple,
Type,
TypedDict,
Union,
)
VersionDict = TypedDict(
"VersionDict", {"major": str, "minor": str, "build_number": str}
)
InfoDict = TypedDict(
"InfoDict",
{
"id": str,
"version": str,
"version_parts": VersionDict,
"like": str,
"codename": str,
},
)
_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc")
_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib")
_OS_RELEASE_BASENAME = "os-release"
#: Translation table for normalizing the "ID" attribute defined in os-release
#: files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as defined in the os-release file, translated to lower case,
#: with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_OS_ID = {
"ol": "oracle", # Oracle Linux
"opensuse-leap": "opensuse", # Newer versions of OpenSuSE report as opensuse-leap
}
#: Translation table for normalizing the "Distributor ID" attribute returned by
#: the lsb_release command, for use by the :func:`distro.id` method.
#:
#: * Key: Value as returned by the lsb_release command, translated to lower
#: case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_LSB_ID = {
"enterpriseenterpriseas": "oracle", # Oracle Enterprise Linux 4
"enterpriseenterpriseserver": "oracle", # Oracle Linux 5
"redhatenterpriseworkstation": "rhel", # RHEL 6, 7 Workstation
"redhatenterpriseserver": "rhel", # RHEL 6, 7 Server
"redhatenterprisecomputenode": "rhel", # RHEL 6 ComputeNode
}
#: Translation table for normalizing the distro ID derived from the file name
#: of distro release files, for use by the :func:`distro.id` method.
#:
#: * Key: Value as derived from the file name of a distro release file,
#: translated to lower case, with blanks translated to underscores.
#:
#: * Value: Normalized value.
NORMALIZED_DISTRO_ID = {
"redhat": "rhel", # RHEL 6.x, 7.x
}
# Pattern for content of distro release file (reversed)
_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile(
r"(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)"
)
# Pattern for base file name of distro release file
_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$")
# Base file names to be ignored when searching for distro release file
_DISTRO_RELEASE_IGNORE_BASENAMES = (
"debian_version",
"lsb-release",
"oem-release",
_OS_RELEASE_BASENAME,
"system-release",
"plesk-release",
"iredmail-release",
)
#
# Python 2.6 does not have subprocess.check_output so replicate it here
#
def _my_check_output(*popenargs, **kwargs):
r"""Run command with arguments and return its output as a byte string.
If the exit code was non-zero it raises a CalledProcessError. The
CalledProcessError object will have the return code in the returncode
attribute and output in the output attribute.
The arguments are the same as for the Popen constructor. Example:
>>> check_output(["ls", "-l", "/dev/null"])
'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
The stdout argument is not allowed as it is used internally.
To capture standard error in the result, use stderr=STDOUT.
>>> check_output(["/bin/sh", "-c",
... "ls -l non_existent_file ; exit 0"],
... stderr=STDOUT)
'ls: non_existent_file: No such file or directory\n'
This is a backport of Python-2.7's check output to Python-2.6
"""
if 'stdout' in kwargs:
raise ValueError(
'stdout argument not allowed, it will be overridden.'
)
process = subprocess.Popen(
stdout=subprocess.PIPE, *popenargs, **kwargs
)
output, unused_err = process.communicate()
retcode = process.poll()
if retcode:
cmd = kwargs.get("args")
if cmd is None:
cmd = popenargs[0]
# Deviation from Python-2.7: Python-2.6's CalledProcessError does not
# have an argument for the stdout so simply omit it.
raise subprocess.CalledProcessError(retcode, cmd)
return output
try:
_check_output = subprocess.check_output
except AttributeError:
_check_output = _my_check_output
def linux_distribution(full_distribution_name=True):
# type: (bool) -> Tuple[str, str, str]
"""
.. deprecated:: 1.6.0
:func:`distro.linux_distribution()` is deprecated. It should only be
used as a compatibility shim with Python's
:py:func:`platform.linux_distribution()`. Please use :func:`distro.id`,
:func:`distro.version` and :func:`distro.name` instead.
Return information about the current OS distribution as a tuple
``(id_name, version, codename)`` with items as follows:
* ``id_name``: If *full_distribution_name* is false, the result of
:func:`distro.id`. Otherwise, the result of :func:`distro.name`.
* ``version``: The result of :func:`distro.version`.
* ``codename``: The result of :func:`distro.codename`.
The interface of this function is compatible with the original
:py:func:`platform.linux_distribution` function, supporting a subset of
its parameters.
The data it returns may not exactly be the same, because it uses more data
sources than the original function, and that may lead to different data if
the OS distribution is not consistent across multiple data sources it
provides (there are indeed such distributions ...).
Another reason for differences is the fact that the :func:`distro.id`
method normalizes the distro ID string to a reliable machine-readable value
for a number of popular OS distributions.
"""
warnings.warn(
"distro.linux_distribution() is deprecated. It should only be used as a "
"compatibility shim with Python's platform.linux_distribution(). Please use "
"distro.id(), distro.version() and distro.name() instead.",
DeprecationWarning,
stacklevel=2,
)
return _distro.linux_distribution(full_distribution_name)
def id():
# type: () -> str
"""
Return the distro ID of the current distribution, as a
machine-readable string.
For a number of OS distributions, the returned distro ID value is
*reliable*, in the sense that it is documented and that it does not change
across releases of the distribution.
This package maintains the following reliable distro ID values:
============== =========================================
Distro ID Distribution
============== =========================================
"ubuntu" Ubuntu
"debian" Debian
"rhel" RedHat Enterprise Linux
"centos" CentOS
"fedora" Fedora
"sles" SUSE Linux Enterprise Server
"opensuse" openSUSE
"amazon" Amazon Linux
"arch" Arch Linux
"cloudlinux" CloudLinux OS
"exherbo" Exherbo Linux
"gentoo" GenToo Linux
"ibm_powerkvm" IBM PowerKVM
"kvmibm" KVM for IBM z Systems
"linuxmint" Linux Mint
"mageia" Mageia
"mandriva" Mandriva Linux
"parallels" Parallels
"pidora" Pidora
"raspbian" Raspbian
"oracle" Oracle Linux (and Oracle Enterprise Linux)
"scientific" Scientific Linux
"slackware" Slackware
"xenserver" XenServer
"openbsd" OpenBSD
"netbsd" NetBSD
"freebsd" FreeBSD
"midnightbsd" MidnightBSD
============== =========================================
If you have a need to get distros for reliable IDs added into this set,
or if you find that the :func:`distro.id` function returns a different
distro ID for one of the listed distros, please create an issue in the
`distro issue tracker`_.
**Lookup hierarchy and transformations:**
First, the ID is obtained from the following sources, in the specified
order. The first available and non-empty value is used:
* the value of the "ID" attribute of the os-release file,
* the value of the "Distributor ID" attribute returned by the lsb_release
command,
* the first part of the file name of the distro release file,
The so determined ID value then passes the following transformations,
before it is returned by this method:
* it is translated to lower case,
* blanks (which should not be there anyway) are translated to underscores,
* a normalization of the ID is performed, based upon
`normalization tables`_. The purpose of this normalization is to ensure
that the ID is as reliable as possible, even across incompatible changes
in the OS distributions. A common reason for an incompatible change is
the addition of an os-release file, or the addition of the lsb_release
command, with ID values that differ from what was previously determined
from the distro release file name.
"""
return _distro.id()
def name(pretty=False):
# type: (bool) -> str
"""
Return the name of the current OS distribution, as a human-readable
string.
If *pretty* is false, the name is returned without version or codename.
(e.g. "CentOS Linux")
If *pretty* is true, the version and codename are appended.
(e.g. "CentOS Linux 7.1.1503 (Core)")
**Lookup hierarchy:**
The name is obtained from the following sources, in the specified order.
The first available and non-empty value is used:
* If *pretty* is false:
- the value of the "NAME" attribute of the os-release file,
- the value of the "Distributor ID" attribute returned by the lsb_release
command,
- the value of the "<name>" field of the distro release file.
* If *pretty* is true:
- the value of the "PRETTY_NAME" attribute of the os-release file,
- the value of the "Description" attribute returned by the lsb_release
command,
- the value of the "<name>" field of the distro release file, appended
with the value of the pretty version ("<version_id>" and "<codename>"
fields) of the distro release file, if available.
"""
return _distro.name(pretty)
def version(pretty=False, best=False):
# type: (bool, bool) -> str
"""
Return the version of the current OS distribution, as a human-readable
string.
If *pretty* is false, the version is returned without codename (e.g.
"7.0").
If *pretty* is true, the codename in parenthesis is appended, if the
codename is non-empty (e.g. "7.0 (Maipo)").
Some distributions provide version numbers with different precisions in
the different sources of distribution information. Examining the different
sources in a fixed priority order does not always yield the most precise
version (e.g. for Debian 8.2, or CentOS 7.1).
The *best* parameter can be used to control the approach for the returned
version:
If *best* is false, the first non-empty version number in priority order of
the examined sources is returned.
If *best* is true, the most precise version number out of all examined
sources is returned.
**Lookup hierarchy:**
In all cases, the version number is obtained from the following sources.
If *best* is false, this order represents the priority order:
* the value of the "VERSION_ID" attribute of the os-release file,
* the value of the "Release" attribute returned by the lsb_release
command,
* the version number parsed from the "<version_id>" field of the first line
of the distro release file,
* the version number parsed from the "PRETTY_NAME" attribute of the
os-release file, if it follows the format of the distro release files.
* the version number parsed from the "Description" attribute returned by
the lsb_release command, if it follows the format of the distro release
files.
"""
return _distro.version(pretty, best)
def version_parts(best=False):
# type: (bool) -> Tuple[str, str, str]
"""
Return the version of the current OS distribution as a tuple
``(major, minor, build_number)`` with items as follows:
* ``major``: The result of :func:`distro.major_version`.
* ``minor``: The result of :func:`distro.minor_version`.
* ``build_number``: The result of :func:`distro.build_number`.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.version_parts(best)
def major_version(best=False):
# type: (bool) -> str
"""
Return the major version of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The major version is the first
part of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.major_version(best)
def minor_version(best=False):
# type: (bool) -> str
"""
Return the minor version of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The minor version is the second
part of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.minor_version(best)
def build_number(best=False):
# type: (bool) -> str
"""
Return the build number of the current OS distribution, as a string,
if provided.
Otherwise, the empty string is returned. The build number is the third part
of the dot-separated version string.
For a description of the *best* parameter, see the :func:`distro.version`
method.
"""
return _distro.build_number(best)
def like():
# type: () -> str
"""
Return a space-separated list of distro IDs of distributions that are
closely related to the current OS distribution in regards to packaging
and programming interfaces, for example distributions the current
distribution is a derivative from.
**Lookup hierarchy:**
This information item is only provided by the os-release file.
For details, see the description of the "ID_LIKE" attribute in the
`os-release man page
<http://www.freedesktop.org/software/systemd/man/os-release.html>`_.
"""
return _distro.like()
def codename():
# type: () -> str
"""
Return the codename for the release of the current OS distribution,
as a string.
If the distribution does not have a codename, an empty string is returned.
Note that the returned codename is not always really a codename. For
example, openSUSE returns "x86_64". This function does not handle such
cases in any special way and just returns the string it finds, if any.
**Lookup hierarchy:**
* the codename within the "VERSION" attribute of the os-release file, if
provided,
* the value of the "Codename" attribute returned by the lsb_release
command,
* the value of the "<codename>" field of the distro release file.
"""
return _distro.codename()
def info(pretty=False, best=False):
# type: (bool, bool) -> InfoDict
"""
Return certain machine-readable information items about the current OS
distribution in a dictionary, as shown in the following example:
.. sourcecode:: python
{
'id': 'rhel',
'version': '7.0',
'version_parts': {
'major': '7',
'minor': '0',
'build_number': ''
},
'like': 'fedora',
'codename': 'Maipo'
}
The dictionary structure and keys are always the same, regardless of which
information items are available in the underlying data sources. The values
for the various keys are as follows:
* ``id``: The result of :func:`distro.id`.
* ``version``: The result of :func:`distro.version`.
* ``version_parts -> major``: The result of :func:`distro.major_version`.
* ``version_parts -> minor``: The result of :func:`distro.minor_version`.
* ``version_parts -> build_number``: The result of
:func:`distro.build_number`.
* ``like``: The result of :func:`distro.like`.
* ``codename``: The result of :func:`distro.codename`.
For a description of the *pretty* and *best* parameters, see the
:func:`distro.version` method.
"""
return _distro.info(pretty, best)
def os_release_info():
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information items
from the os-release file data source of the current OS distribution.
See `os-release file`_ for details about these information items.
"""
return _distro.os_release_info()
def lsb_release_info():
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information items
from the lsb_release command data source of the current OS distribution.
See `lsb_release command output`_ for details about these information
items.
"""
return _distro.lsb_release_info()
def distro_release_info():
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information items
from the distro release file data source of the current OS distribution.
See `distro release file`_ for details about these information items.
"""
return _distro.distro_release_info()
def uname_info():
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information items
from the distro release file data source of the current OS distribution.
"""
return _distro.uname_info()
def os_release_attr(attribute):
# type: (str) -> str
"""
Return a single named information item from the os-release file data source
of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `os-release file`_ for details about these information items.
"""
return _distro.os_release_attr(attribute)
def lsb_release_attr(attribute):
# type: (str) -> str
"""
Return a single named information item from the lsb_release command output
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `lsb_release command output`_ for details about these information
items.
"""
return _distro.lsb_release_attr(attribute)
def distro_release_attr(attribute):
# type: (str) -> str
"""
Return a single named information item from the distro release file
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
See `distro release file`_ for details about these information items.
"""
return _distro.distro_release_attr(attribute)
def uname_attr(attribute):
# type: (str) -> str
"""
Return a single named information item from the distro release file
data source of the current OS distribution.
Parameters:
* ``attribute`` (string): Key of the information item.
Returns:
* (string): Value of the information item, if the item exists.
The empty string, if the item does not exist.
"""
return _distro.uname_attr(attribute)
try:
from functools import cached_property
except ImportError:
# Python < 3.8
class cached_property(object): # type: ignore
"""A version of @property which caches the value. On access, it calls the
underlying function and sets the value in `__dict__` so future accesses
will not re-call the property.
"""
def __init__(self, f):
# type: (Callable[[Any], Any]) -> None
self._fname = f.__name__
self._f = f
def __get__(self, obj, owner):
# type: (Any, Type[Any]) -> Any
assert obj is not None, "call {0} on an instance".format(self._fname)
ret = obj.__dict__[self._fname] = self._f(obj)
return ret
class LinuxDistribution(object):
"""
Provides information about an OS distribution.
This package creates a private module-global instance of this class with
default initialization arguments, that is used by the
`consolidated accessor functions`_ and `single source accessor functions`_.
By using default initialization arguments, that module-global instance
returns data about the current OS distribution (i.e. the distro this
package runs on).
Normally, it is not necessary to create additional instances of this class.
However, in situations where control is needed over the exact data sources
that are used, instances of this class can be created with a specific
distro release file, or a specific os-release file, or without invoking the
lsb_release command.
"""
def __init__(
self,
include_lsb=True,
os_release_file="",
distro_release_file="",
include_uname=True,
root_dir=None,
):
# type: (bool, str, str, bool, Optional[str]) -> None
"""
The initialization method of this class gathers information from the
available data sources, and stores that in private instance attributes.
Subsequent access to the information items uses these private instance
attributes, so that the data sources are read only once.
Parameters:
* ``include_lsb`` (bool): Controls whether the
`lsb_release command output`_ is included as a data source.
If the lsb_release command is not available in the program execution
path, the data source for the lsb_release command will be empty.
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is to be used as a data source.
An empty string (the default) will cause the default path name to
be used (see `os-release file`_ for details).
If the specified or defaulted os-release file does not exist, the
data source for the os-release file will be empty.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is to be used as a data source.
An empty string (the default) will cause a default search algorithm
to be used (see `distro release file`_ for details).
If the specified distro release file does not exist, or if no default
distro release file can be found, the data source for the distro
release file will be empty.
* ``include_uname`` (bool): Controls whether uname command output is
included as a data source. If the uname command is not available in
the program execution path the data source for the uname command will
be empty.
* ``root_dir`` (string): The absolute path to the root directory to use
to find distro-related information files.
Public instance attributes:
* ``os_release_file`` (string): The path name of the
`os-release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``distro_release_file`` (string): The path name of the
`distro release file`_ that is actually used as a data source. The
empty string if no distro release file is used as a data source.
* ``include_lsb`` (bool): The result of the ``include_lsb`` parameter.
This controls whether the lsb information will be loaded.
* ``include_uname`` (bool): The result of the ``include_uname``
parameter. This controls whether the uname information will
be loaded.
Raises:
* :py:exc:`IOError`: Some I/O issue with an os-release file or distro
release file.
* :py:exc:`subprocess.CalledProcessError`: The lsb_release command had
some issue (other than not being available in the program execution
path).
* :py:exc:`UnicodeError`: A data source has unexpected characters or
uses an unexpected encoding.
"""
self.root_dir = root_dir
self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR
self.usr_lib_dir = (
os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR
)
if os_release_file:
self.os_release_file = os_release_file
else:
etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME)
usr_lib_os_release_file = os.path.join(
self.usr_lib_dir, _OS_RELEASE_BASENAME
)
# NOTE: The idea is to respect order **and** have it set
# at all times for API backwards compatibility.
if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile(
usr_lib_os_release_file
):
self.os_release_file = etc_dir_os_release_file
else:
self.os_release_file = usr_lib_os_release_file
self.distro_release_file = distro_release_file or "" # updated later
self.include_lsb = include_lsb
self.include_uname = include_uname
def __repr__(self):
# type: () -> str
"""Return repr of all info"""
return (
"LinuxDistribution("
"os_release_file={self.os_release_file!r}, "
"distro_release_file={self.distro_release_file!r}, "
"include_lsb={self.include_lsb!r}, "
"include_uname={self.include_uname!r}, "
"_os_release_info={self._os_release_info!r}, "
"_lsb_release_info={self._lsb_release_info!r}, "
"_distro_release_info={self._distro_release_info!r}, "
"_uname_info={self._uname_info!r})".format(self=self)
)
def linux_distribution(self, full_distribution_name=True):
# type: (bool) -> Tuple[str, str, str]
"""
Return information about the OS distribution that is compatible
with Python's :func:`platform.linux_distribution`, supporting a subset
of its parameters.
For details, see :func:`distro.linux_distribution`.
"""
return (
self.name() if full_distribution_name else self.id(),
self.version(),
self.codename(),
)
def id(self):
# type: () -> str
"""Return the distro ID of the OS distribution, as a string.
For details, see :func:`distro.id`.
"""
def normalize(distro_id, table):
# type: (str, Dict[str, str]) -> str
distro_id = distro_id.lower().replace(" ", "_")
return table.get(distro_id, distro_id)
distro_id = self.os_release_attr("id")
if distro_id:
return normalize(distro_id, NORMALIZED_OS_ID)
distro_id = self.lsb_release_attr("distributor_id")
if distro_id:
return normalize(distro_id, NORMALIZED_LSB_ID)
distro_id = self.distro_release_attr("id")
if distro_id:
return normalize(distro_id, NORMALIZED_DISTRO_ID)
distro_id = self.uname_attr("id")
if distro_id:
return normalize(distro_id, NORMALIZED_DISTRO_ID)
return ""
def name(self, pretty=False):
# type: (bool) -> str
"""
Return the name of the OS distribution, as a string.
For details, see :func:`distro.name`.
"""
name = (
self.os_release_attr("name")
or self.lsb_release_attr("distributor_id")
or self.distro_release_attr("name")
or self.uname_attr("name")
)
if pretty:
name = self.os_release_attr("pretty_name") or self.lsb_release_attr(
"description"
)
if not name:
name = self.distro_release_attr("name") or self.uname_attr("name")
version = self.version(pretty=True)
if version:
name = name + " " + version
return name or ""
def version(self, pretty=False, best=False):
# type: (bool, bool) -> str
"""
Return the version of the OS distribution, as a string.
For details, see :func:`distro.version`.
"""
versions = [
self.os_release_attr("version_id"),
self.lsb_release_attr("release"),
self.distro_release_attr("version_id"),
self._parse_distro_release_content(self.os_release_attr("pretty_name")).get(
"version_id", ""
),
self._parse_distro_release_content(
self.lsb_release_attr("description")
).get("version_id", ""),
self.uname_attr("release"),
]
version = ""
if best:
# This algorithm uses the last version in priority order that has
# the best precision. If the versions are not in conflict, that
# does not matter; otherwise, using the last one instead of the
# first one might be considered a surprise.
for v in versions:
if v.count(".") > version.count(".") or version == "":
version = v
else:
for v in versions:
if v != "":
version = v
break
if pretty and version and self.codename():
version = "{0} ({1})".format(version, self.codename())
return version
def version_parts(self, best=False):
# type: (bool) -> Tuple[str, str, str]
"""
Return the version of the OS distribution, as a tuple of version
numbers.
For details, see :func:`distro.version_parts`.
"""
version_str = self.version(best=best)
if version_str:
version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?")
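# Illustrative note (added comment): "8.2" matches as ("8", "2", None)
# and "7" as ("7", None, None); missing groups become "" below.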
matches = version_regex.match(version_str)
if matches:
major, minor, build_number = matches.groups()
return major, minor or "", build_number or ""
return "", "", ""
def major_version(self, best=False):
# type: (bool) -> str
"""
Return the major version number of the current distribution.
For details, see :func:`distro.major_version`.
"""
return self.version_parts(best)[0]
def minor_version(self, best=False):
# type: (bool) -> str
"""
Return the minor version number of the current distribution.
For details, see :func:`distro.minor_version`.
"""
return self.version_parts(best)[1]
def build_number(self, best=False):
# type: (bool) -> str
"""
Return the build number of the current distribution.
For details, see :func:`distro.build_number`.
"""
return self.version_parts(best)[2]
def like(self):
# type: () -> str
"""
Return the IDs of distributions that are like the OS distribution.
For details, see :func:`distro.like`.
"""
return self.os_release_attr("id_like") or ""
def codename(self):
# type: () -> str
"""
Return the codename of the OS distribution.
For details, see :func:`distro.codename`.
"""
try:
# Handle os_release specially since distros might purposefully set
# this to empty string to have no codename
return self._os_release_info["codename"]
except KeyError:
return (
self.lsb_release_attr("codename")
or self.distro_release_attr("codename")
or ""
)
def info(self, pretty=False, best=False):
# type: (bool, bool) -> InfoDict
"""
Return certain machine-readable information about the OS
distribution.
For details, see :func:`distro.info`.
"""
return dict(
id=self.id(),
version=self.version(pretty, best),
version_parts=dict(
major=self.major_version(best),
minor=self.minor_version(best),
build_number=self.build_number(best),
),
like=self.like(),
codename=self.codename(),
)
def os_release_info(self):
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information
items from the os-release file data source of the OS distribution.
For details, see :func:`distro.os_release_info`.
"""
return self._os_release_info
def lsb_release_info(self):
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information
items from the lsb_release command data source of the OS
distribution.
For details, see :func:`distro.lsb_release_info`.
"""
return self._lsb_release_info
def distro_release_info(self):
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information
items from the distro release file data source of the OS
distribution.
For details, see :func:`distro.distro_release_info`.
"""
return self._distro_release_info
def uname_info(self):
# type: () -> Dict[str, str]
"""
Return a dictionary containing key-value pairs for the information
items from the uname command data source of the OS distribution.
For details, see :func:`distro.uname_info`.
"""
return self._uname_info
def os_release_attr(self, attribute):
# type: (str) -> str
"""
Return a single named information item from the os-release file data
source of the OS distribution.
For details, see :func:`distro.os_release_attr`.
"""
return self._os_release_info.get(attribute, "")
def lsb_release_attr(self, attribute):
# type: (str) -> str
"""
Return a single named information item from the lsb_release command
output data source of the OS distribution.
For details, see :func:`distro.lsb_release_attr`.
"""
return self._lsb_release_info.get(attribute, "")
def distro_release_attr(self, attribute):
# type: (str) -> str
"""
Return a single named information item from the distro release file
data source of the OS distribution.
For details, see :func:`distro.distro_release_attr`.
"""
return self._distro_release_info.get(attribute, "")
def uname_attr(self, attribute):
# type: (str) -> str
"""
Return a single named information item from the uname command
output data source of the OS distribution.
For details, see :func:`distro.uname_attr`.
"""
return self._uname_info.get(attribute, "")
@cached_property
def _os_release_info(self):
# type: () -> Dict[str, str]
"""
Get the information items from the specified os-release file.
Returns:
A dictionary containing all information items.
"""
if os.path.isfile(self.os_release_file):
with open(self.os_release_file) as release_file:
return self._parse_os_release_content(release_file)
return {}
@staticmethod
def _parse_os_release_content(lines):
# type: (TextIO) -> Dict[str, str]
"""
Parse the lines of an os-release file.
Parameters:
* lines: Iterable through the lines in the os-release file.
Each line must be a unicode string or a UTF-8 encoded byte
string.
Returns:
A dictionary containing all information items.
"""
props = {}
lexer = shlex.shlex(lines, posix=True)
lexer.whitespace_split = True
# The shlex module defines its `wordchars` variable using literals,
# making it dependent on the encoding of the Python source file.
# In Python 2.6 and 2.7, the shlex source file is encoded in
# 'iso-8859-1', and the `wordchars` variable is defined as a byte
# string. This causes a UnicodeDecodeError to be raised when the
# parsed content is a unicode object. The following fix resolves that
# (... but it should be fixed in shlex...):
if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes):
lexer.wordchars = lexer.wordchars.decode("iso-8859-1")
tokens = list(lexer)
for token in tokens:
# At this point, all shell-like parsing has been done (i.e.
# comments processed, quotes and backslash escape sequences
# processed, multi-line values assembled, trailing newlines
# stripped, etc.), so the tokens are now either:
# * variable assignments: var=value
# * commands or their arguments (not allowed in os-release)
if "=" in token:
k, v = token.split("=", 1)
props[k.lower()] = v
else:
# Ignore any tokens that are not variable assignments
pass
if "version_codename" in props:
# os-release added a version_codename field. Use that in
# preference to anything else Note that some distros purposefully
# do not have code names. They should be setting
# version_codename=""
props["codename"] = props["version_codename"]
elif "ubuntu_codename" in props:
# Same as above but a non-standard field name used on older Ubuntus
props["codename"] = props["ubuntu_codename"]
elif "version" in props:
# If there is no version_codename, parse it from the version
match = re.search(r"(\(\D+\))|,(\s+)?\D+", props["version"])
if match:
codename = match.group()
codename = codename.strip("()")
codename = codename.strip(",")
codename = codename.strip()
# codename appears within parentheses.
props["codename"] = codename
return props
@cached_property
def _lsb_release_info(self):
# type: () -> Dict[str, str]
"""
Get the information items from the lsb_release command output.
Returns:
A dictionary containing all information items.
"""
if not self.include_lsb:
return {}
with open(os.devnull, "wb") as devnull:
try:
cmd = ("lsb_release", "-a")
stdout = _check_output(cmd, stderr=devnull)
# Command not found or lsb_release returned error
except (OSError, subprocess.CalledProcessError):
return {}
content = self._to_str(stdout).splitlines()
return self._parse_lsb_release_content(content)
@staticmethod
def _parse_lsb_release_content(lines):
# type: (Iterable[str]) -> Dict[str, str]
"""
Parse the output of the lsb_release command.
Parameters:
* lines: Iterable through the lines of the lsb_release output.
Each line must be a unicode string or a UTF-8 encoded byte
string.
Returns:
A dictionary containing all information items.
"""
props = {}
for line in lines:
kv = line.strip("\n").split(":", 1)
if len(kv) != 2:
# Ignore lines without colon.
continue
k, v = kv
props.update({k.replace(" ", "_").lower(): v.strip()})
return props
@cached_property
def _uname_info(self):
# type: () -> Dict[str, str]
with open(os.devnull, "wb") as devnull:
try:
cmd = ("uname", "-rs")
stdout = _check_output(cmd, stderr=devnull)
except OSError:
return {}
content = self._to_str(stdout).splitlines()
return self._parse_uname_content(content)
@staticmethod
def _parse_uname_content(lines):
# type: (Sequence[str]) -> Dict[str, str]
props = {}
match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip())
if match:
name, version = match.groups()
# This is to prevent the Linux kernel version from
# appearing as the 'best' version on otherwise
# identifiable distributions.
if name == "Linux":
return {}
props["id"] = name.lower()
props["name"] = name
props["release"] = version
return props
@staticmethod
def _to_str(text):
# type: (Union[bytes, str]) -> str
encoding = sys.getfilesystemencoding()
encoding = "utf-8" if encoding == "ascii" else encoding
if sys.version_info[0] >= 3:
if isinstance(text, bytes):
return text.decode(encoding)
else:
if isinstance(text, unicode): # noqa pylint: disable=undefined-variable
return text.encode(encoding)
return text
@cached_property
def _distro_release_info(self):
# type: () -> Dict[str, str]
"""
Get the information items from the specified distro release file.
Returns:
A dictionary containing all information items.
"""
if self.distro_release_file:
# If it was specified, we use it and parse what we can, even if
# its file name or content does not match the expected pattern.
distro_info = self._parse_distro_release_file(self.distro_release_file)
basename = os.path.basename(self.distro_release_file)
# The file name pattern for user-specified distro release files
# is somewhat more tolerant (compared to when searching for the
# file), because we want to use what was specified as best as
# possible.
match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
if "name" in distro_info and "cloudlinux" in distro_info["name"].lower():
distro_info["id"] = "cloudlinux"
elif match:
distro_info["id"] = match.group(1)
return distro_info
else:
try:
basenames = os.listdir(self.etc_dir)
# We sort for repeatability in cases where there are multiple
# distro specific files; e.g. CentOS, Oracle, Enterprise all
# containing `redhat-release` on top of their own.
basenames.sort()
except OSError:
# This may occur when /etc is not readable but we can't be
# sure about the *-release files. Check common entries of
# /etc for information. If they turn out to not be there the
# error is handled in `_parse_distro_release_file()`.
basenames = [
"SuSE-release",
"arch-release",
"base-release",
"centos-release",
"fedora-release",
"gentoo-release",
"mageia-release",
"mandrake-release",
"mandriva-release",
"mandrivalinux-release",
"manjaro-release",
"oracle-release",
"redhat-release",
"sl-release",
"slackware-version",
]
for basename in basenames:
if basename in _DISTRO_RELEASE_IGNORE_BASENAMES:
continue
match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename)
if match:
filepath = os.path.join(self.etc_dir, basename)
distro_info = self._parse_distro_release_file(filepath)
if "name" in distro_info:
# The name is always present if the pattern matches
self.distro_release_file = filepath
distro_info["id"] = match.group(1)
if "cloudlinux" in distro_info["name"].lower():
distro_info["id"] = "cloudlinux"
return distro_info
return {}
def _parse_distro_release_file(self, filepath):
# type: (str) -> Dict[str, str]
"""
Parse a distro release file.
Parameters:
* filepath: Path name of the distro release file.
Returns:
A dictionary containing all information items.
"""
try:
with open(filepath) as fp:
# Only parse the first line. For instance, on SLES there
# are multiple lines. We don't want them...
return self._parse_distro_release_content(fp.readline())
except (OSError, IOError):
# Ignore not being able to read a specific, seemingly version
# related file.
# See https://github.com/python-distro/distro/issues/162
return {}
@staticmethod
def _parse_distro_release_content(line):
# type: (str) -> Dict[str, str]
"""
Parse a line from a distro release file.
Parameters:
* line: Line from the distro release file. Must be a unicode string
or a UTF-8 encoded byte string.
Returns:
A dictionary containing all information items.
"""
matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1])
distro_info = {}
if matches:
# regexp ensures non-None
distro_info["name"] = matches.group(3)[::-1]
if matches.group(2):
distro_info["version_id"] = matches.group(2)[::-1]
if matches.group(1):
distro_info["codename"] = matches.group(1)[::-1]
elif line:
distro_info["name"] = line.strip()
return distro_info
_distro = LinuxDistribution()
def main():
# type: () -> None
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(sys.stdout))
dist = _distro
logger.info("Name: %s", dist.name(pretty=True))
distribution_version = dist.version(pretty=True)
logger.info("Version: %s", distribution_version)
distribution_codename = dist.codename()
logger.info("Codename: %s", distribution_codename)
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,928 |
Failed to get Debian minor version
|
### Summary
This is a follow up to issue #74481, which was closed, but the fix doesn't address the problem described in this issue.
Here is a (slightly modified) copy of my comment on the above issue:
PR #74721 doesn't solve the problem reported in issue #74481.
This PR adds a new `ansible_distribution_minor_version` fact.
But issue #74481 is about the `ansible_distribution_version` fact sometimes not including the minor version.
Here is a list of `ansible_distribution_version` facts gathered for my Debian hosts:
6.0.8
7.6
7.7
7.11
8
8.2
8.10
8.11
9.6
9.12
9.13
10
11
You see that for Debian 8 `ansible_distribution_version` is reported sometimes with and sometimes without the minor version.
From Debian 10 onwards, the fact doesn't include the minor version.
For CentOS, `ansible_distribution_version` always includes the minor version:
7.4
7.5
7.6
7.8
7.9
8.3
8.4
8.5
To make `ansible_distribution_version` consistent between distributions, the minor version should be added to this fact for Debian hosts.
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/setup.py
### Ansible Version
```console
ansible [core 2.12.6]
```
### Configuration
```console
NA
```
### OS / Environment
Debian
### Steps to Reproduce
NA
### Expected Results
NA
### Actual Results
```console
NA
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77928
|
https://github.com/ansible/ansible/pull/79227
|
505b29b2a981eabb2dd84bc66d37704bab91c3f9
|
f79a54ae22b59d4c9bab0fb71d95c63b2e4b834b
| 2022-05-30T06:30:58Z |
python
| 2022-11-23T19:44:15Z |
test/sanity/ignore.txt
|
.azure-pipelines/scripts/publish-codecov.py replace-urlopen
docs/docsite/rst/dev_guide/testing/sanity/no-smart-quotes.rst no-smart-quotes
docs/docsite/rst/locales/ja/LC_MESSAGES/dev_guide.po no-smart-quotes # Translation of the no-smart-quotes rule
examples/scripts/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSCustomUseLiteralPath
examples/scripts/upgrade_to_ps3.ps1 pslint:PSUseApprovedVerbs
lib/ansible/cli/scripts/ansible_connection_cli_stub.py shebang
lib/ansible/config/base.yml no-unwanted-files
lib/ansible/executor/playbook_executor.py pylint:disallowed-name
lib/ansible/executor/powershell/async_watchdog.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/async_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/powershell/exec_wrapper.ps1 pslint:PSCustomUseLiteralPath
lib/ansible/executor/task_queue_manager.py pylint:disallowed-name
lib/ansible/keyword_desc.yml no-unwanted-files
lib/ansible/modules/apt.py validate-modules:parameter-invalid
lib/ansible/modules/apt_repository.py validate-modules:parameter-invalid
lib/ansible/modules/assemble.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/async_status.py use-argspec-type-path
lib/ansible/modules/async_status.py validate-modules!skip
lib/ansible/modules/async_wrapper.py ansible-doc!skip # not an actual module
lib/ansible/modules/async_wrapper.py pylint:ansible-bad-function # ignore, required
lib/ansible/modules/async_wrapper.py use-argspec-type-path
lib/ansible/modules/blockinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/blockinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/command.py validate-modules:doc-default-does-not-match-spec # _uses_shell is undocumented
lib/ansible/modules/command.py validate-modules:doc-missing-type
lib/ansible/modules/command.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/command.py validate-modules:undocumented-parameter
lib/ansible/modules/copy.py pylint:disallowed-name
lib/ansible/modules/copy.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/copy.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/copy.py validate-modules:undocumented-parameter
lib/ansible/modules/dnf.py validate-modules:parameter-invalid
lib/ansible/modules/file.py validate-modules:undocumented-parameter
lib/ansible/modules/find.py use-argspec-type-path # fix needed
lib/ansible/modules/git.py pylint:disallowed-name
lib/ansible/modules/git.py use-argspec-type-path
lib/ansible/modules/git.py validate-modules:doc-required-mismatch
lib/ansible/modules/iptables.py pylint:disallowed-name
lib/ansible/modules/lineinfile.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/lineinfile.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/package_facts.py validate-modules:doc-choices-do-not-match-spec
lib/ansible/modules/pip.py pylint:disallowed-name
lib/ansible/modules/replace.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:nonexistent-parameter-documented
lib/ansible/modules/service.py validate-modules:use-run-command-not-popen
lib/ansible/modules/stat.py validate-modules:doc-default-does-not-match-spec # get_md5 is undocumented
lib/ansible/modules/stat.py validate-modules:parameter-invalid
lib/ansible/modules/stat.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/stat.py validate-modules:undocumented-parameter
lib/ansible/modules/systemd_service.py validate-modules:parameter-invalid
lib/ansible/modules/systemd_service.py validate-modules:return-syntax-error
lib/ansible/modules/sysvinit.py validate-modules:return-syntax-error
lib/ansible/modules/uri.py validate-modules:doc-required-mismatch
lib/ansible/modules/user.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/user.py validate-modules:use-run-command-not-popen
lib/ansible/modules/yum.py pylint:disallowed-name
lib/ansible/modules/yum.py validate-modules:parameter-invalid
lib/ansible/modules/yum_repository.py validate-modules:doc-default-does-not-match-spec
lib/ansible/modules/yum_repository.py validate-modules:parameter-type-not-in-doc
lib/ansible/modules/yum_repository.py validate-modules:undocumented-parameter
lib/ansible/module_utils/compat/_selectors2.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/compat/_selectors2.py pylint:disallowed-name
lib/ansible/module_utils/compat/selinux.py import-2.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.5!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.6!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.7!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.8!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/compat/selinux.py import-3.9!skip # pass/fail depends on presence of libselinux.so
lib/ansible/module_utils/distro/_distro.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/distro/_distro.py no-assert
lib/ansible/module_utils/distro/_distro.py pylint:using-constant-test # bundled code we don't want to modify
lib/ansible/module_utils/distro/_distro.py pep8!skip # bundled code we don't want to modify
lib/ansible/module_utils/distro/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/facts/__init__.py empty-init # breaks namespacing, deprecate and eventually remove
lib/ansible/module_utils/facts/network/linux.py pylint:disallowed-name
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.ArgvParser.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSProvideCommentHelp # need to agree on best format for comment location
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.CommandUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.FileUtil.psm1 pslint:PSProvideCommentHelp
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSCustomUseLiteralPath
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/powershell/Ansible.ModuleUtils.LinkUtil.psm1 pslint:PSUseApprovedVerbs
lib/ansible/module_utils/pycompat24.py no-get-exception
lib/ansible/module_utils/six/__init__.py empty-init # breaks namespacing, bundled, do not override
lib/ansible/module_utils/six/__init__.py future-import-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py metaclass-boilerplate # ignore bundled
lib/ansible/module_utils/six/__init__.py no-basestring
lib/ansible/module_utils/six/__init__.py no-dict-iteritems
lib/ansible/module_utils/six/__init__.py no-dict-iterkeys
lib/ansible/module_utils/six/__init__.py no-dict-itervalues
lib/ansible/module_utils/six/__init__.py pylint:self-assigning-variable
lib/ansible/module_utils/six/__init__.py replace-urlopen
lib/ansible/module_utils/urls.py pylint:arguments-renamed
lib/ansible/module_utils/urls.py pylint:disallowed-name
lib/ansible/module_utils/urls.py replace-urlopen
lib/ansible/parsing/vault/__init__.py pylint:disallowed-name
lib/ansible/parsing/yaml/objects.py pylint:arguments-renamed
lib/ansible/playbook/base.py pylint:disallowed-name
lib/ansible/playbook/collectionsearch.py required-and-default-attributes # https://github.com/ansible/ansible/issues/61460
lib/ansible/playbook/helpers.py pylint:disallowed-name
lib/ansible/playbook/playbook_include.py pylint:arguments-renamed
lib/ansible/playbook/role/include.py pylint:arguments-renamed
lib/ansible/plugins/action/normal.py action-plugin-docs # default action plugin for modules without a dedicated action plugin
lib/ansible/plugins/cache/base.py ansible-doc!skip # not a plugin, but a stub for backwards compatibility
lib/ansible/plugins/callback/__init__.py pylint:arguments-renamed
lib/ansible/plugins/inventory/advanced_host_list.py pylint:arguments-renamed
lib/ansible/plugins/inventory/host_list.py pylint:arguments-renamed
lib/ansible/plugins/lookup/random_choice.py pylint:arguments-renamed
lib/ansible/plugins/lookup/sequence.py pylint:disallowed-name
lib/ansible/plugins/shell/cmd.py pylint:arguments-renamed
lib/ansible/plugins/strategy/__init__.py pylint:disallowed-name
lib/ansible/plugins/strategy/linear.py pylint:disallowed-name
lib/ansible/utils/collection_loader/_collection_finder.py pylint:deprecated-class
lib/ansible/utils/collection_loader/_collection_meta.py pylint:deprecated-class
lib/ansible/vars/hostvars.py pylint:disallowed-name
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-function # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import-from # ignore, required for testing
test/integration/targets/ansible-test-sanity/ansible_collections/ns/col/tests/integration/targets/hello/files/bad.py pylint:ansible-bad-import # ignore, required for testing
test/integration/targets/ansible-test-integration/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-units/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/plugins/modules/hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/modules/test_hello.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-docker/ansible_collections/ns/col/tests/unit/plugins/module_utils/test_my_util.py pylint:relative-beyond-top-level
test/integration/targets/ansible-test-no-tty/ansible_collections/ns/col/vendored_pty.py pep8!skip # vendored code
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/modules/my_module.py pylint:relative-beyond-top-level
test/integration/targets/collections_relative_imports/collection_root/ansible_collections/my_ns/my_col/plugins/module_utils/my_util2.py pylint:relative-beyond-top-level
test/integration/targets/gathering_facts/library/bogus_facts shebang
test/integration/targets/gathering_facts/library/facts_one shebang
test/integration/targets/gathering_facts/library/facts_two shebang
test/integration/targets/incidental_win_reboot/templates/post_reboot.ps1 pslint!skip
test/integration/targets/json_cleanup/library/bad_json shebang
test/integration/targets/lookup_csvfile/files/crlf.csv line-endings
test/integration/targets/lookup_ini/lookup-8859-15.ini no-smart-quotes
test/integration/targets/module_precedence/lib_with_extension/a.ini shebang
test/integration/targets/module_precedence/lib_with_extension/ping.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/a.ini shebang
test/integration/targets/module_precedence/roles_with_extension/foo/library/ping.ini shebang
test/integration/targets/module_utils/library/test.py future-import-boilerplate # allow testing of Python 2.x implicit relative imports
test/integration/targets/module_utils/module_utils/bar0/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/foo.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/bar.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/sub/bar/__init__.py pylint:disallowed-name
test/integration/targets/module_utils/module_utils/yak/zebra/foo.py pylint:disallowed-name
test/integration/targets/old_style_modules_posix/library/helloworld.sh shebang
test/integration/targets/template/files/encoding_1252_utf-8.expected no-smart-quotes
test/integration/targets/template/files/encoding_1252_windows-1252.expected no-smart-quotes
test/integration/targets/template/files/foo.dos.txt line-endings
test/integration/targets/template/templates/encoding_1252.j2 no-smart-quotes
test/integration/targets/unicode/unicode.yml no-smart-quotes
test/integration/targets/windows-minimal/library/win_ping_syntax_error.ps1 pslint!skip
test/integration/targets/win_exec_wrapper/library/test_fail.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_exec_wrapper/tasks/main.yml no-smart-quotes # We are explicitly testing smart quote support for env vars
test/integration/targets/win_fetch/tasks/main.yml no-smart-quotes # We are explictly testing smart quotes in the file name to fetch
test/integration/targets/win_module_utils/library/legacy_only_new_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_module_utils/library/legacy_only_old_way_win_line_ending.ps1 line-endings # Explicitly tests that we still work with Windows line endings
test/integration/targets/win_script/files/test_script.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_removes_file.ps1 pslint:PSCustomUseLiteralPath
test/integration/targets/win_script/files/test_script_with_args.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/integration/targets/win_script/files/test_script_with_splatting.ps1 pslint:PSAvoidUsingWriteHost # Keep
test/lib/ansible_test/_data/requirements/sanity.pslint.ps1 pslint:PSCustomUseLiteralPath # Uses wildcards on purpose
test/lib/ansible_test/_util/target/setup/ConfigureRemotingForAnsible.ps1 pslint:PSCustomUseLiteralPath
test/lib/ansible_test/_util/target/setup/requirements.py replace-urlopen
test/support/integration/plugins/modules/timezone.py pylint:disallowed-name
test/support/integration/plugins/module_utils/compat/ipaddress.py future-import-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py metaclass-boilerplate
test/support/integration/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/integration/plugins/module_utils/network/common/utils.py future-import-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py metaclass-boilerplate
test/support/integration/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/netconf/netconf.py pylint:used-before-assignment
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/filter/network.py pylint:consider-using-dict-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py no-unicode-literals
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/compat/ipaddress.py pep8:E203
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/facts/facts.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/module_utils/network/common/utils.py pylint:use-a-generator
test/support/network-integration/collections/ansible_collections/ansible/netcommon/plugins/netconf/default.py pylint:unnecessary-comprehension
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/cliconf/ios.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/cisco/ios/plugins/modules/ios_config.py pep8:E501
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/cliconf/vyos.py pylint:arguments-renamed
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pep8:E231
test/support/network-integration/collections/ansible_collections/vyos/vyos/plugins/modules/vyos_command.py pylint:disallowed-name
test/support/windows-integration/plugins/action/win_copy.py pylint:used-before-assignment
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/module_utils/WebRequest.psm1 pslint!skip
test/support/windows-integration/collections/ansible_collections/ansible/windows/plugins/modules/win_uri.ps1 pslint!skip
test/support/windows-integration/plugins/modules/async_status.ps1 pslint!skip
test/support/windows-integration/plugins/modules/setup.ps1 pslint!skip
test/support/windows-integration/plugins/modules/slurp.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_acl.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_certificate_store.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_command.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_copy.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_file.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_get_url.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_lineinfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_regedit.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_shell.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_stat.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_tempfile.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user_right.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_user.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_wait_for.ps1 pslint!skip
test/support/windows-integration/plugins/modules/win_whoami.ps1 pslint!skip
test/units/executor/test_play_iterator.py pylint:disallowed-name
test/units/modules/test_apt.py pylint:disallowed-name
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-no-version
test/units/module_utils/basic/test_deprecate_warn.py pylint:ansible-deprecated-version
test/units/module_utils/basic/test_run_command.py pylint:disallowed-name
test/units/module_utils/urls/fixtures/multipart.txt line-endings # Fixture for HTTP tests that use CRLF
test/units/module_utils/urls/test_fetch_url.py replace-urlopen
test/units/module_utils/urls/test_gzip.py replace-urlopen
test/units/module_utils/urls/test_Request.py replace-urlopen
test/units/parsing/vault/test_vault.py pylint:disallowed-name
test/units/playbook/role/test_role.py pylint:disallowed-name
test/units/plugins/test_plugins.py pylint:disallowed-name
test/units/template/test_templar.py pylint:disallowed-name
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/action/my_action.py pylint:relative-beyond-top-level
test/units/utils/collection_loader/fixtures/collections/ansible_collections/testns/testcoll/plugins/modules/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/ansible/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/fixtures/collections_masked/ansible_collections/testns/testcoll/__init__.py empty-init # testing that collections don't need inits
test/units/utils/collection_loader/test_collection_loader.py pylint:undefined-variable # magic runtime local var splatting
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,404 |
KeyError: 'getpwuid(): uid not found: 1000' (again - Ansible 2.9.27)
|
### Summary
As in my previous report, I am trying to run Ansible in a container (Kubernetes - GKE) driven by Jenkins.
I have a custom image with Ansible installed:
```console
[root@8a6fe05d3b73 /]# ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
Then I start a Pod with UID `1000` (the Jenkins default UID) and try to run Kubespray `v2.17.1` (https://github.com/kubernetes-sigs/kubespray/tree/v2.17.1). It fails with the following logs:
```
PLAY [localhost]
***************************************************************
Thursday 17 November 2022 16:03:39 +0000 (0:00:00.136) 0:00:00.136
*****
/usr/lib/python3.6/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't
match a supported version!
RequestsDependencyWarning)
TASK [Check 2.9.0 <= Ansible version < 2.11.0]
*********************************
An exception occurred during task execution. To see the full traceback, use
-vvv. The error was: KeyError: 'getpwuid(): uid not found: 1000'
fatal: [localhost]: FAILED! => {"msg": "Unexpected failure during module
execution.", "stdout": ""}
PLAY RECAP
*********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
skipped=0 rescued=0 ignored=0
Thursday 17 November 2022 16:03:39 +0000 (0:00:00.043) 0:00:00.180
*****
===============================================================================
Check 2.9.0 <= Ansible version < 2.11.0 ---------------------------------
0.04s
```
The related play code (https://github.com/kubernetes-sigs/kubespray/blob/v2.17.1/ansible_version.yml):
```yaml
---
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.11.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"
that: "'127.0.0.1' | ipaddr"
tags:
- check
# CentOS 7 provides too old jinja version
- name: "Check that jinja is not too old (install via pip)"
assert:
msg: "Your Jinja version is too old, install via pip"
that: "{% set test %}It works{% endset %}{{ test == 'It works' }}"
tags:
- check
```
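For context, the failure can be reproduced outside Ansible entirely; the sketch below only illustrates the failure mode, assuming a process whose UID (here `1000`) has no `/etc/passwd` entry, which is exactly the situation in this Pod:

```python
import getpass
import os
import pwd

uid = os.getuid()  # e.g. 1000 in the Jenkins Pod

# With no passwd entry for the UID, the lookup raises KeyError:
try:
    print(pwd.getpwuid(uid)[0])
except KeyError as exc:
    print(exc)  # 'getpwuid(): uid not found: 1000'

# getpass.getuser() consults the LOGNAME, USER, LNAME and USERNAME
# environment variables first and only then falls back to pwd.getpwuid(),
# so the same KeyError propagates when none of them are set -- which is
# what the local connection plugin hits in its __init__.
try:
    print(getpass.getuser())
except KeyError as exc:
    print(exc)
```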
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed
```
### OS / Environment
- Kubernetes
- uname: `Linux 8a6fe05d3b73 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
- /etc/os-release
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
### Steps to Reproduce
https://github.com/kubernetes-sigs/kubespray/blob/v2.17.1/ansible_version.yml:
```yaml
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.11.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"
that: "'127.0.0.1' | ipaddr"
tags:
- check
# CentOS 7 provides too old jinja version
- name: "Check that jinja is not too old (install via pip)"
assert:
msg: "Your Jinja version is too old, install via pip"
that: "{% set test %}It works{% endset %}{{ test == 'It works' }}"
tags:
- check
```
### Expected Results
The version check should pass and the playbook should run.
### Actual Results
```console
ansible-playbook 2.9.27
config file = /home/jenkins/agent/workspace/kubespray/ansible.cfg
configured module search path = ['/home/jenkins/agent/workspace/kubespray/library']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
Using /home/jenkins/agent/workspace/kubespray/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/jenkins/agent/workspace/kubespray/hosts as it did not pass its verify_file() method
auto declined parsing /home/jenkins/agent/workspace/kubespray/hosts as it did not pass its verify_file() method
Parsed /home/jenkins/agent/workspace/kubespray/hosts inventory source with ini plugin
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0040-set_facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0050-create_directories.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0060-resolvconf.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0061-systemd-resolved.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0062-networkmanager-unmanaged-devices.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0063-networkmanager-dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0100-dhclient-hooks.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0110-dhclient-hooks-undo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0120-growpart-azure-centos-7.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/cri-o/tasks/crio_repo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/docker/tasks/pre-upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/docker/tasks/systemd.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/etcd/handlers/backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/etcd/handlers/backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/pre_upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/install.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/loadbalancer/nginx-proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/loadbalancer/haproxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/kubelet.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/tokens/tasks/check-tokens.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/tokens/tasks/gen_tokens.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/pre-upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/encrypt-at-rest.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-setup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/cilium/tasks/check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/pre.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/repos.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/kube-router/tasks/annotate.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/cleanup_dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/coredns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/nodelocaldns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/netchecker.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/dashboard.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/krew/tasks/krew.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/krew/tasks/krew.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/cloud_controller/oci/tasks/credentials-check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0040-set_facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0050-create_directories.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0060-resolvconf.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0061-systemd-resolved.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0062-networkmanager-unmanaged-devices.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0063-networkmanager-dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0100-dhclient-hooks.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0110-dhclient-hooks-undo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0120-growpart-azure-centos-7.yml
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.6/site-packages/ansible/plugins/callback/default.py
Attempting to use 'actionable' callback.
Skipping callback 'actionable', as we already have a stdout callback.
Attempting to use 'aws_resource_actions' callback.
Attempting to use 'cgroup_memory_recap' callback.
Attempting to use 'cgroup_perf_recap' callback.
Attempting to use 'context_demo' callback.
Attempting to use 'counter_enabled' callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Attempting to use 'debug' callback.
Skipping callback 'debug', as we already have a stdout callback.
Attempting to use 'dense' callback.
Skipping callback 'dense', as we already have a stdout callback.
Attempting to use 'dense' callback.
Skipping callback 'dense', as we already have a stdout callback.
Attempting to use 'foreman' callback.
Attempting to use 'full_skip' callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Attempting to use 'grafana_annotations' callback.
Attempting to use 'hipchat' callback.
Attempting to use 'jabber' callback.
Attempting to use 'json' callback.
Skipping callback 'json', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'log_plays' callback.
Attempting to use 'logdna' callback.
Attempting to use 'logentries' callback.
Attempting to use 'logstash' callback.
Attempting to use 'mail' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'nrdp' callback.
Attempting to use 'null' callback.
Skipping callback 'null', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'osx_say' callback.
Attempting to use 'profile_roles' callback.
Attempting to use 'profile_tasks' callback.
Loading callback plugin profile_tasks of type aggregate, v2.0 from /usr/lib/python3.6/site-packages/ansible/plugins/callback/profile_tasks.py
Attempting to use 'say' callback.
Attempting to use 'selective' callback.
Skipping callback 'selective', as we already have a stdout callback.
Attempting to use 'skippy' callback.
Skipping callback 'skippy', as we already have a stdout callback.
Attempting to use 'slack' callback.
Attempting to use 'splunk' callback.
Attempting to use 'stderr' callback.
Skipping callback 'stderr', as we already have a stdout callback.
Attempting to use 'sumologic' callback.
Attempting to use 'syslog_json' callback.
Attempting to use 'timer' callback.
Attempting to use 'tree' callback.
Attempting to use 'unixy' callback.
Skipping callback 'unixy', as we already have a stdout callback.
Attempting to use 'yaml' callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: cluster.yml **********************************************************
Positional arguments: cluster.yml
verbosity: 5
private_key_file: /home/jenkins/agent/workspace/kubespray/ssh.key
remote_user: management-user
connection: smart
timeout: 10
become: True
become_method: sudo
tags: ('all',)
inventory: ('/home/jenkins/agent/workspace/kubespray/hosts',)
extra_vars: ('managed_zone_name=company-zone', 'managed_zone=company.com', 'zone=europe-west1-b', 'project_id=company-gcp-project', 'configurator_release=./', 'kube_version=v1.19.3', "{ 'supplementary_addresses_in_ssl_keys': ['1.2.3.4', 'env.company.com']}", 'podsecuritypolicy_enabled=true')
forks: 5
19 plays in cluster.yml
PLAY [localhost] ***************************************************************
META: ran handlers
Thursday 17 November 2022 18:07:04 +0000 (0:00:00.099) 0:00:00.099 *****
/usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
TASK [Check 2.9.0 <= Ansible version < 2.11.0] *********************************
task path: /home/jenkins/agent/workspace/kubespray/ansible_version.yml:12
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 147, in run
res = self._execute()
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 620, in _execute
self._connection = self._get_connection(cvars, templar)
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 913, in _get_connection
ansible_playbook_pid=to_text(os.getppid())
File "/usr/lib/python3.6/site-packages/ansible/plugins/loader.py", line 573, in get
obj = obj(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/local.py", line 47, in __init__
self.default_user = getpass.getuser()
File "/usr/lib64/python3.6/getpass.py", line 169, in getuser
return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Thursday 17 November 2022 18:07:04 +0000 (0:00:00.047) 0:00:00.146 *****
===============================================================================
Check 2.9.0 <= Ansible version < 2.11.0 --------------------------------- 0.05s
/home/jenkins/agent/workspace/kubespray/ansible_version.yml:12
```
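The traceback shows the crash comes from the unguarded `getpass.getuser()` call in `plugins/connection/local.py`. A defensive pattern along the lines below would avoid it; this is a sketch of the general approach only, not necessarily the exact change made in the linked pull request:

```python
import getpass

try:
    default_user = getpass.getuser()
except KeyError:
    # The current UID has no passwd entry (common in containers started
    # with an arbitrary UID); leave the default user undetermined rather
    # than letting the KeyError abort the whole task.
    default_user = None
```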
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79404
|
https://github.com/ansible/ansible/pull/79414
|
38fe34244ca166418a882cc6e191ccff3a8fc9d1
|
5f3a6b78db093f8d1b062bbd70ac6bf375fdca04
| 2022-11-17T18:04:36Z |
python
| 2022-11-29T15:08:32Z |
changelogs/fragments/local_bad_user.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,404 |
KeyError: 'getpwuid(): uid not found: 1000' (again - Ansible 2.9.27)
|
### Summary
As in my previous report, I am trying to run Ansible in a container (Kubernetes - GKE) driven by Jenkins.
I have a custom image with Ansible installed:
```console
[root@8a6fe05d3b73 /]# ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
Then I start a Pod with UID `1000` (the Jenkins default UID) and try to run Kubespray `v2.17.1` (https://github.com/kubernetes-sigs/kubespray/tree/v2.17.1). It fails with the following logs:
```
PLAY [localhost]
***************************************************************
Thursday 17 November 2022 16:03:39 +0000 (0:00:00.136) 0:00:00.136
*****
/usr/lib/python3.6/site-packages/requests/__init__.py:91:
RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't
match a supported version!
RequestsDependencyWarning)
TASK [Check 2.9.0 <= Ansible version < 2.11.0]
*********************************
An exception occurred during task execution. To see the full traceback, use
-vvv. The error was: KeyError: 'getpwuid(): uid not found: 1000'
fatal: [localhost]: FAILED! => {"msg": "Unexpected failure during module
execution.", "stdout": ""}
PLAY RECAP
*********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1
skipped=0 rescued=0 ignored=0
Thursday 17 November 2022 16:03:39 +0000 (0:00:00.043) 0:00:00.180
*****
===============================================================================
Check 2.9.0 <= Ansible version < 2.11.0 ---------------------------------
0.04s
```
The related play code (https://github.com/kubernetes-sigs/kubespray/blob/v2.17.1/ansible_version.yml):
```yaml
---
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.11.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"
that: "'127.0.0.1' | ipaddr"
tags:
- check
# CentOS 7 provides too old jinja version
- name: "Check that jinja is not too old (install via pip)"
assert:
msg: "Your Jinja version is too old, install via pip"
that: "{% set test %}It works{% endset %}{{ test == 'It works' }}"
tags:
- check
```
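Independently of any fix in ansible-core, the crash can be sidestepped by exporting one of the environment variables that `getpass.getuser()` consults before it falls back to the passwd database; a minimal sketch, with `jenkins` as a purely illustrative user name:

```python
import getpass
import os

# getpass.getuser() checks LOGNAME, USER, LNAME and USERNAME (in that
# order) before calling pwd.getpwuid(), so setting any of them avoids
# the KeyError even when the UID has no /etc/passwd entry.
os.environ["USER"] = "jenkins"  # hypothetical name, for illustration only
print(getpass.getuser())        # -> 'jenkins'
```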
### Issue Type
Bug Report
### Component Name
core
### Ansible Version
```console
$ ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed
```
### OS / Environment
- Kubernetes
- uname: `Linux 8a6fe05d3b73 5.15.0-53-generic #59-Ubuntu SMP Mon Oct 17 18:53:30 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux`
- /etc/os-release
```
NAME="CentOS Stream"
VERSION="8"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="8"
PLATFORM_ID="platform:el8"
PRETTY_NAME="CentOS Stream 8"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:8"
HOME_URL="https://centos.org/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_SUPPORT_PRODUCT_VERSION="CentOS Stream"
```
### Steps to Reproduce
https://github.com/kubernetes-sigs/kubespray/blob/v2.17.1/ansible_version.yml:
```yaml
- hosts: localhost
gather_facts: false
become: no
vars:
minimal_ansible_version: 2.9.0
minimal_ansible_version_2_10: 2.10.11
maximal_ansible_version: 2.11.0
ansible_connection: local
tags: always
tasks:
- name: "Check {{ minimal_ansible_version }} <= Ansible version < {{ maximal_ansible_version }}"
assert:
msg: "Ansible must be between {{ minimal_ansible_version }} and {{ maximal_ansible_version }}"
that:
- ansible_version.string is version(minimal_ansible_version, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
tags:
- check
- name: "Check Ansible version > {{ minimal_ansible_version_2_10 }} when using ansible 2.10"
assert:
msg: "When using Ansible 2.10, the minimum supported version is {{ minimal_ansible_version_2_10 }}"
that:
- ansible_version.string is version(minimal_ansible_version_2_10, ">=")
- ansible_version.string is version(maximal_ansible_version, "<")
when:
- ansible_version.string is version('2.10.0', ">=")
tags:
- check
- name: "Check that python netaddr is installed"
assert:
msg: "Python netaddr is not present"
that: "'127.0.0.1' | ipaddr"
tags:
- check
# CentOS 7 provides too old jinja version
- name: "Check that jinja is not too old (install via pip)"
assert:
msg: "Your Jinja version is too old, install via pip"
that: "{% set test %}It works{% endset %}{{ test == 'It works' }}"
tags:
- check
```
### Expected Results
The version check should pass and the playbook should run.
### Actual Results
```console
ansible-playbook 2.9.27
config file = /home/jenkins/agent/workspace/kubespray/ansible.cfg
configured module search path = ['/home/jenkins/agent/workspace/kubespray/library']
ansible python module location = /usr/lib/python3.6/site-packages/ansible
executable location = /usr/bin/ansible-playbook
python version = 3.6.8 (default, Oct 19 2021, 05:14:06) [GCC 8.5.0 20210514 (Red Hat 8.5.0-3)]
Using /home/jenkins/agent/workspace/kubespray/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /home/jenkins/agent/workspace/kubespray/hosts as it did not pass its verify_file() method
auto declined parsing /home/jenkins/agent/workspace/kubespray/hosts as it did not pass its verify_file() method
Parsed /home/jenkins/agent/workspace/kubespray/hosts inventory source with ini plugin
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0040-set_facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0050-create_directories.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0060-resolvconf.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0061-systemd-resolved.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0062-networkmanager-unmanaged-devices.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0063-networkmanager-dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0100-dhclient-hooks.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0110-dhclient-hooks-undo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0120-growpart-azure-centos-7.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/cri-o/tasks/crio_repo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/docker/tasks/pre-upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/container-engine/docker/tasks/systemd.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/etcd/handlers/backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/etcd/handlers/backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/pre_upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/install.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/loadbalancer/nginx-proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/loadbalancer/haproxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/node/tasks/kubelet.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/tokens/tasks/check-tokens.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/tokens/tasks/gen_tokens.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/pre-upgrade.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/encrypt-at-rest.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-setup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/control-plane/tasks/kubeadm-backup.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/cilium/tasks/check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/pre.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/calico/tasks/repos.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/network_plugin/kube-router/tasks/annotate.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/cleanup_dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/coredns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/nodelocaldns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/netchecker.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/ansible/tasks/dashboard.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/krew/tasks/krew.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/krew/tasks/krew.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes-apps/cloud_controller/oci/tasks/credentials-check.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/download/tasks/prep_download.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/fallback_ips.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubespray-defaults/tasks/no_proxy.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0010-swapoff.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0020-verify-settings.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0040-set_facts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0050-create_directories.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0060-resolvconf.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0061-systemd-resolved.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0062-networkmanager-unmanaged-devices.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0063-networkmanager-dns.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0070-system-packages.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0080-system-configurations.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0090-etchosts.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0100-dhclient-hooks.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0110-dhclient-hooks-undo.yml
statically imported: /home/jenkins/agent/workspace/kubespray/roles/kubernetes/preinstall/tasks/0120-growpart-azure-centos-7.yml
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python3.6/site-packages/ansible/plugins/callback/default.py
Attempting to use 'actionable' callback.
Skipping callback 'actionable', as we already have a stdout callback.
Attempting to use 'aws_resource_actions' callback.
Attempting to use 'cgroup_memory_recap' callback.
Attempting to use 'cgroup_perf_recap' callback.
Attempting to use 'context_demo' callback.
Attempting to use 'counter_enabled' callback.
Skipping callback 'counter_enabled', as we already have a stdout callback.
Attempting to use 'debug' callback.
Skipping callback 'debug', as we already have a stdout callback.
Attempting to use 'dense' callback.
Skipping callback 'dense', as we already have a stdout callback.
Attempting to use 'dense' callback.
Skipping callback 'dense', as we already have a stdout callback.
Attempting to use 'foreman' callback.
Attempting to use 'full_skip' callback.
Skipping callback 'full_skip', as we already have a stdout callback.
Attempting to use 'grafana_annotations' callback.
Attempting to use 'hipchat' callback.
Attempting to use 'jabber' callback.
Attempting to use 'json' callback.
Skipping callback 'json', as we already have a stdout callback.
Attempting to use 'junit' callback.
Attempting to use 'log_plays' callback.
Attempting to use 'logdna' callback.
Attempting to use 'logentries' callback.
Attempting to use 'logstash' callback.
Attempting to use 'mail' callback.
Attempting to use 'minimal' callback.
Skipping callback 'minimal', as we already have a stdout callback.
Attempting to use 'nrdp' callback.
Attempting to use 'null' callback.
Skipping callback 'null', as we already have a stdout callback.
Attempting to use 'oneline' callback.
Skipping callback 'oneline', as we already have a stdout callback.
Attempting to use 'osx_say' callback.
Attempting to use 'profile_roles' callback.
Attempting to use 'profile_tasks' callback.
Loading callback plugin profile_tasks of type aggregate, v2.0 from /usr/lib/python3.6/site-packages/ansible/plugins/callback/profile_tasks.py
Attempting to use 'say' callback.
Attempting to use 'selective' callback.
Skipping callback 'selective', as we already have a stdout callback.
Attempting to use 'skippy' callback.
Skipping callback 'skippy', as we already have a stdout callback.
Attempting to use 'slack' callback.
Attempting to use 'splunk' callback.
Attempting to use 'stderr' callback.
Skipping callback 'stderr', as we already have a stdout callback.
Attempting to use 'sumologic' callback.
Attempting to use 'syslog_json' callback.
Attempting to use 'timer' callback.
Attempting to use 'tree' callback.
Attempting to use 'unixy' callback.
Skipping callback 'unixy', as we already have a stdout callback.
Attempting to use 'yaml' callback.
Skipping callback 'yaml', as we already have a stdout callback.
PLAYBOOK: cluster.yml **********************************************************
Positional arguments: cluster.yml
verbosity: 5
private_key_file: /home/jenkins/agent/workspace/kubespray/ssh.key
remote_user: management-user
connection: smart
timeout: 10
become: True
become_method: sudo
tags: ('all',)
inventory: ('/home/jenkins/agent/workspace/kubespray/hosts',)
extra_vars: ('managed_zone_name=company-zone', 'managed_zone=company.com', 'zone=europe-west1-b', 'project_id=company-gcp-project', 'configurator_release=./', 'kube_version=v1.19.3', "{ 'supplementary_addresses_in_ssl_keys': ['1.2.3.4', 'env.company.com']}", 'podsecuritypolicy_enabled=true')
forks: 5
19 plays in cluster.yml
PLAY [localhost] ***************************************************************
META: ran handlers
Thursday 17 November 2022 18:07:04 +0000 (0:00:00.099) 0:00:00.099 *****
/usr/lib/python3.6/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.26.8) or chardet (3.0.4) doesn't match a supported version!
RequestsDependencyWarning)
TASK [Check 2.9.0 <= Ansible version < 2.11.0] *********************************
task path: /home/jenkins/agent/workspace/kubespray/ansible_version.yml:12
The full traceback is:
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 147, in run
res = self._execute()
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 620, in _execute
self._connection = self._get_connection(cvars, templar)
File "/usr/lib/python3.6/site-packages/ansible/executor/task_executor.py", line 913, in _get_connection
ansible_playbook_pid=to_text(os.getppid())
File "/usr/lib/python3.6/site-packages/ansible/plugins/loader.py", line 573, in get
obj = obj(*args, **kwargs)
File "/usr/lib/python3.6/site-packages/ansible/plugins/connection/local.py", line 47, in __init__
self.default_user = getpass.getuser()
File "/usr/lib64/python3.6/getpass.py", line 169, in getuser
return pwd.getpwuid(os.getuid())[0]
KeyError: 'getpwuid(): uid not found: 1000'
fatal: [localhost]: FAILED! => {
"msg": "Unexpected failure during module execution.",
"stdout": ""
}
PLAY RECAP *********************************************************************
localhost : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Thursday 17 November 2022 18:07:04 +0000 (0:00:00.047) 0:00:00.146 *****
===============================================================================
Check 2.9.0 <= Ansible version < 2.11.0 --------------------------------- 0.05s
/home/jenkins/agent/workspace/kubespray/ansible_version.yml:12
```
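For context: `getpass.getuser()` consults the `LOGNAME`, `USER`, `LNAME` and `USERNAME` environment variables and, if none is set, falls back to `pwd.getpwuid(os.getuid())`, which raises `KeyError` for a uid that has no passwd entry — typical for containers started with an arbitrary uid, as in this Jenkins pod. A minimal defensive sketch (a hypothetical helper, not the upstream fix):

```python
import getpass
import os

def best_effort_user():
    # getpass.getuser() checks LOGNAME/USER/LNAME/USERNAME first; with none
    # set it falls back to the passwd database, and that lookup raises
    # KeyError when the current uid (1000 in this pod) has no entry.
    try:
        return getpass.getuser()
    except KeyError:
        # Fall back to the numeric uid so connection setup can proceed.
        return str(os.getuid())
```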
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79404
|
https://github.com/ansible/ansible/pull/79414
|
38fe34244ca166418a882cc6e191ccff3a8fc9d1
|
5f3a6b78db093f8d1b062bbd70ac6bf375fdca04
| 2022-11-17T18:04:36Z |
python
| 2022-11-29T15:08:32Z |
lib/ansible/plugins/connection/local.py
|
# (c) 2012, Michael DeHaan <[email protected]>
# (c) 2015, 2017 Toshio Kuratomi <[email protected]>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: local
short_description: execute on controller
description:
- This connection plugin allows ansible to execute tasks on the Ansible 'controller' instead of on a remote host.
author: ansible (@core)
version_added: historical
extends_documentation_fragment:
- connection_pipelining
notes:
- The remote user is ignored, the user with which the ansible CLI was executed is used instead.
'''
import os
import pty
import shutil
import subprocess
import fcntl
import getpass
import ansible.constants as C
from ansible.errors import AnsibleError, AnsibleFileNotFound
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import text_type, binary_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath
display = Display()
class Connection(ConnectionBase):
''' Local based connections '''
transport = 'local'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
self.cwd = None
self.default_user = getpass.getuser()
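        # NOTE: getpass.getuser() falls back to pwd.getpwuid(os.getuid())
        # when no LOGNAME/USER/LNAME/USERNAME environment variable is set;
        # that fallback raises KeyError for a uid without a passwd entry
        # (the "uid not found: 1000" traceback in the report above).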
def _connect(self):
''' connect to the local host; nothing to do here '''
# Because we haven't made any remote connection we're running as
# the local user, rather than as whatever is configured in remote_user.
self._play_context.remote_user = self.default_user
if not self._connected:
display.vvv(u"ESTABLISH LOCAL CONNECTION FOR USER: {0}".format(self._play_context.remote_user), host=self._play_context.remote_addr)
self._connected = True
return self
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the local host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.debug("in local.exec_command()")
executable = C.DEFAULT_EXECUTABLE.split()[0] if C.DEFAULT_EXECUTABLE else None
if not os.path.exists(to_bytes(executable, errors='surrogate_or_strict')):
raise AnsibleError("failed to find the executable specified %s."
" Please verify if the executable exists and re-try." % executable)
display.vvv(u"EXEC {0}".format(to_text(cmd)), host=self._play_context.remote_addr)
display.debug("opening command with Popen()")
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = map(to_bytes, cmd)
master = None
stdin = subprocess.PIPE
if sudoable and self.become and self.become.expect_prompt() and not self.get_option('pipelining'):
# Create a pty if sudoable for privilege escalation that needs it.
# Falls back to using a standard pipe if this fails, which may
# cause the command to fail in certain situations where we are escalating
# privileges or the command otherwise needs a pty.
try:
master, stdin = pty.openpty()
except (IOError, OSError) as e:
display.debug("Unable to open pty: %s" % to_native(e))
p = subprocess.Popen(
cmd,
shell=isinstance(cmd, (text_type, binary_type)),
executable=executable,
cwd=self.cwd,
stdin=stdin,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
)
# if we created a master, we can close the other half of the pty now, otherwise master is stdin
if master is not None:
os.close(stdin)
display.debug("done running command with Popen()")
if self.become and self.become.expect_prompt() and sudoable:
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) | os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) | os.O_NONBLOCK)
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
become_output = b''
try:
while not self.become.check_success(become_output) and not self.become.check_password_prompt(become_output):
events = selector.select(self._play_context.timeout)
if not events:
stdout, stderr = p.communicate()
raise AnsibleError('timeout waiting for privilege escalation password prompt:\n' + to_native(become_output))
for key, event in events:
if key.fileobj == p.stdout:
chunk = p.stdout.read()
elif key.fileobj == p.stderr:
chunk = p.stderr.read()
if not chunk:
stdout, stderr = p.communicate()
raise AnsibleError('privilege output closed while waiting for password prompt:\n' + to_native(become_output))
become_output += chunk
finally:
selector.close()
if not self.become.check_success(become_output):
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
if master is None:
p.stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
else:
os.write(master, to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
fcntl.fcntl(p.stdout, fcntl.F_SETFL, fcntl.fcntl(p.stdout, fcntl.F_GETFL) & ~os.O_NONBLOCK)
fcntl.fcntl(p.stderr, fcntl.F_SETFL, fcntl.fcntl(p.stderr, fcntl.F_GETFL) & ~os.O_NONBLOCK)
display.debug("getting output with communicate()")
stdout, stderr = p.communicate(in_data)
display.debug("done communicating")
# finally, close the other half of the pty, if it was created
if master:
os.close(master)
display.debug("done with local.exec_command()")
return (p.returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to local '''
super(Connection, self).put_file(in_path, out_path)
in_path = unfrackpath(in_path, basedir=self.cwd)
out_path = unfrackpath(out_path, basedir=self.cwd)
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
try:
shutil.copyfile(to_bytes(in_path, errors='surrogate_or_strict'), to_bytes(out_path, errors='surrogate_or_strict'))
except shutil.Error:
raise AnsibleError("failed to copy: {0} and {1} are the same".format(to_native(in_path), to_native(out_path)))
except IOError as e:
raise AnsibleError("failed to transfer file to {0}: {1}".format(to_native(out_path), to_native(e)))
def fetch_file(self, in_path, out_path):
''' fetch a file from local to local -- for compatibility '''
super(Connection, self).fetch_file(in_path, out_path)
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self._play_context.remote_addr)
self.put_file(in_path, out_path)
def close(self):
''' terminate the connection; nothing to do here '''
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,430 |
password lookup rewrites file when using `encrypt`
|
### Summary
The password lookup rewrites the password file with the same content if the `encrypt` option is used. This happens on every invocation, not only when the `salt` or `ident` values are first added to the file.
This bug was introduced in commit 1bd7dcf339d and is caused by this code in `lib/ansible/plugins/lookup/password.py`:
```python
ident = params['ident']
if encrypt and not ident:
changed = True
try:
ident = BaseHash.algorithms[encrypt].implicit_ident
except KeyError:
ident = None
```
While this bug seems minor, since only the file modification time changes and the actual file contents stay the same, it is quite annoying when the files are stored on an encrypted overlay filesystem such as encfs, where the encrypted file content does change. It also confuses backup and sync tools that monitor the modification time.
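A minimal sketch of the direction a fix could take (an assumption on my part, not a quote of PR 79431): derive the implicit `ident` for hashing without flagging the file as changed, since nothing new needs to be written in that case:

```python
ident = params['ident']
if encrypt and not ident:
    try:
        # Use the scheme's implicit ident for hashing only; do not set
        # changed -- no new value is persisted to the password file.
        ident = BaseHash.algorithms[encrypt].implicit_ident
    except KeyError:
        ident = None
```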
### Issue Type
Bug Report
### Component Name
password
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 1bda6750f5) last updated 2022/11/21 15:47:49 (GMT +200)
config file = /home/gaudenz/.ansible.cfg
configured module search path = ['/home/gaudenz/.ansible/plugins/library']
ansible python module location = /home/gaudenz/projects/ansible/ansible-core/lib/ansible
ansible collection location = /home/gaudenz/projects/ansible/ansible-core/collections:/usr/share/ansible/collections
executable location = /home/gaudenz/projects/ansible/ansible-core/bin/ansible
python version = 3.10.8 (main, Nov 4 2022, 09:21:25) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_HOME(env: ANSIBLE_HOME) = /home/gaudenz/projects/ansible/ansible-core
CONFIG_FILE() = /home/gaudenz/.ansible.cfg
DEFAULT_MODULE_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/plugins/library']
DEFAULT_ROLES_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/roles']
DEFAULT_TRANSPORT(/home/gaudenz/.ansible.cfg) = ssh
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/gaudenz/.ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/gaudenz/.ansible.cfg) = False
ssh:
___
host_key_checking(/home/gaudenz/.ansible.cfg) = False
```
### OS / Environment
Debian testing
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```console
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:03.976492618 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:11.916522686 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
```
### Expected Results
The second invocation should not change the file modification time.
### Actual Results
```console
The second invocation changes the file modification time without actually changing the file content.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79430
|
https://github.com/ansible/ansible/pull/79431
|
3936b5c471068d86c3e51a454a1de2f0d2942845
|
c33a782a9c1e6d1e6b900c0eed642dfd3defac1c
| 2022-11-21T14:58:45Z |
python
| 2022-11-29T15:26:30Z |
changelogs/fragments/79431-fix-password-lookup-rewrites.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,430 |
password lookup rewrites file when using `encrypt`
|
### Summary
The password lookup rewrites the password file with the same content if the `encrypt` option is used. This happens on every invocation, not only when the `salt` or `ident` values are first added to the file.
This bug was introduced in commit 1bd7dcf339d and is caused by this code in `lib/ansible/plugins/lookup/password.py`:
```python
ident = params['ident']
if encrypt and not ident:
changed = True
try:
ident = BaseHash.algorithms[encrypt].implicit_ident
except KeyError:
ident = None
```
While this bug seems minor, since only the file modification time changes and the actual file contents stay the same, it is quite annoying when the files are stored on an encrypted overlay filesystem such as encfs, where the encrypted file content does change. It also confuses backup and sync tools that monitor the modification time.
### Issue Type
Bug Report
### Component Name
password
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 1bda6750f5) last updated 2022/11/21 15:47:49 (GMT +200)
config file = /home/gaudenz/.ansible.cfg
configured module search path = ['/home/gaudenz/.ansible/plugins/library']
ansible python module location = /home/gaudenz/projects/ansible/ansible-core/lib/ansible
ansible collection location = /home/gaudenz/projects/ansible/ansible-core/collections:/usr/share/ansible/collections
executable location = /home/gaudenz/projects/ansible/ansible-core/bin/ansible
python version = 3.10.8 (main, Nov 4 2022, 09:21:25) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_HOME(env: ANSIBLE_HOME) = /home/gaudenz/projects/ansible/ansible-core
CONFIG_FILE() = /home/gaudenz/.ansible.cfg
DEFAULT_MODULE_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/plugins/library']
DEFAULT_ROLES_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/roles']
DEFAULT_TRANSPORT(/home/gaudenz/.ansible.cfg) = ssh
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/gaudenz/.ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/gaudenz/.ansible.cfg) = False
ssh:
___
host_key_checking(/home/gaudenz/.ansible.cfg) = False
```
### OS / Environment
Debian testing
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```console
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:03.976492618 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:11.916522686 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
```
### Expected Results
The second invocation should not change the file modification time.
### Actual Results
```console
The second invocation changes the file modification time without actually changing the file content.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79430
|
https://github.com/ansible/ansible/pull/79431
|
3936b5c471068d86c3e51a454a1de2f0d2942845
|
c33a782a9c1e6d1e6b900c0eed642dfd3defac1c
| 2022-11-21T14:58:45Z |
python
| 2022-11-29T15:26:30Z |
lib/ansible/plugins/lookup/password.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2013, Javier Candeira <[email protected]>
# (c) 2013, Maykel Moya <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = """
name: password
version_added: "1.1"
author:
- Daniel Hokka Zakrisson (!UNKNOWN) <[email protected]>
- Javier Candeira (!UNKNOWN) <[email protected]>
- Maykel Moya (!UNKNOWN) <[email protected]>
short_description: retrieve or generate a random password, stored in a file
description:
- Generates a random plaintext password and stores it in a file at a given filepath.
- If the file exists previously, it will retrieve its contents, behaving just like with_file.
- 'Variables like C("{{ inventory_hostname }}") in the filepath can be used to set up random passwords per host,
which simplifies password management in C("host_vars") variables.'
- A special case is using /dev/null as a path. The password lookup will generate a new random password each time,
but will not write it to /dev/null. This can be used when you need a password without storing it on the controller.
options:
_terms:
description:
- path to the file that stores/will store the passwords
required: True
encrypt:
description:
- Which hash scheme to encrypt the returning password, should be one hash scheme from C(passlib.hash; md5_crypt, bcrypt, sha256_crypt, sha512_crypt).
- If not provided, the password will be returned in plain text.
- Note that the password is always stored as plain text, only the returning password is encrypted.
- Encrypt also forces saving the salt value for idempotence.
- Note that before 2.6 this option was incorrectly labeled as a boolean for a long time.
ident:
description:
- Specify version of Bcrypt algorithm to be used while using C(encrypt) as C(bcrypt).
- The parameter is only available for C(bcrypt) - U(https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt).
- Other hash types will simply ignore this parameter.
- 'Valid values for this parameter are: C(2), C(2a), C(2y), C(2b).'
type: string
version_added: "2.12"
chars:
version_added: "1.4"
description:
- A list of names that compose a custom character set in the generated passwords.
- 'By default generated passwords contain a random mix of upper and lowercase ASCII letters, the numbers 0-9, and punctuation (". , : - _").'
- "They can be either parts of Python's string module attributes or represented literally ( :, -)."
- "Though string modules can vary by Python version, valid values for both major releases include:
'ascii_lowercase', 'ascii_uppercase', 'digits', 'hexdigits', 'octdigits', 'printable', 'punctuation' and 'whitespace'."
- Be aware that Python's 'hexdigits' includes lower and upper case versions of a-f, so it is not a good choice as it doubles
the chances of those values for systems that won't distinguish case, distorting the expected entropy.
- "when using a comma separated string, to enter comma use two commas ',,' somewhere - preferably at the end.
Quotes and double quotes are not supported."
type: list
elements: str
default: ['ascii_letters', 'digits', ".,:-_"]
length:
description: The length of the generated password.
default: 20
type: integer
seed:
version_added: "2.12"
description:
- A seed to initialize the random number generator.
- Identical seeds will yield identical passwords.
- Use this for random-but-idempotent password generation.
type: str
notes:
- A great alternative to the password lookup plugin,
if you don't need to generate random passwords on a per-host basis,
would be to use Vault in playbooks.
Read the documentation there and consider using it first,
it will be more desirable for most applications.
- If the file already exists, no data will be written to it.
If the file has contents, those contents will be read in as the password.
Empty files cause the password to return as an empty string.
- 'As all lookups, this runs on the Ansible host as the user running the playbook, and "become" does not apply,
the target file must be readable by the playbook user, or, if it does not exist,
the playbook user must have sufficient privileges to create it.
(So, for example, attempts to write into areas such as /etc will fail unless the entire playbook is being run as root).'
"""
EXAMPLES = """
- name: create a mysql user with a random password
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', 'credentials/' + client + '/' + tier + '/' + role + '/mysqlpassword length=15') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using only ascii letters
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile chars=ascii_letters') }}"
priv: '{{ client }}_{{ tier }}_{{ role }}.*:ALL'
- name: create a mysql user with an 8 character random password using only digits
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile length=8 chars=digits') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create a mysql user with a random password using many different char sets
community.mysql.mysql_user:
name: "{{ client }}"
password: "{{ lookup('ansible.builtin.password', '/tmp/passwordfile chars=ascii_letters,digits,punctuation') }}"
priv: "{{ client }}_{{ tier }}_{{ role }}.*:ALL"
- name: create lowercase 8 character name for Kubernetes pod name
ansible.builtin.set_fact:
random_pod_name: "web-{{ lookup('ansible.builtin.password', '/dev/null chars=ascii_lowercase,digits length=8') }}"
- name: create random but idempotent password
ansible.builtin.set_fact:
password: "{{ lookup('ansible.builtin.password', '/dev/null', seed=inventory_hostname) }}"
"""
RETURN = """
_raw:
description:
- a password
type: list
elements: str
"""
import os
import string
import time
import hashlib
from ansible.errors import AnsibleError, AnsibleAssertionError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.six import string_types
from ansible.parsing.splitter import parse_kv
from ansible.plugins.lookup import LookupBase
from ansible.utils.encrypt import BaseHash, do_encrypt, random_password, random_salt
from ansible.utils.path import makedirs_safe
VALID_PARAMS = frozenset(('length', 'encrypt', 'chars', 'ident', 'seed'))
def _read_password_file(b_path):
"""Read the contents of a password file and return it
:arg b_path: A byte string containing the path to the password file
:returns: a text string containing the contents of the password file or
None if no password file was present.
"""
content = None
if os.path.exists(b_path):
with open(b_path, 'rb') as f:
b_content = f.read().rstrip()
content = to_text(b_content, errors='surrogate_or_strict')
return content
def _gen_candidate_chars(characters):
'''Generate a string containing all valid chars as defined by ``characters``
:arg characters: A list of character specs. The character specs are
shorthand names for sets of characters like 'digits', 'ascii_letters',
or 'punctuation' or a string to be included verbatim.
The values of each char spec can be:
* a name of an attribute in the 'strings' module ('digits' for example).
The value of the attribute will be added to the candidate chars.
* a string of characters. If the string isn't an attribute in 'string'
module, the string will be directly added to the candidate chars.
For example::
characters=['digits', '?|']
will match ``string.digits`` and add all ascii digits. ``'?|'`` will add
the question mark and pipe characters directly. Return will be the string::
u'0123456789?|'
'''
chars = []
for chars_spec in characters:
# getattr from string expands things like "ascii_letters" and "digits"
# into a set of characters.
chars.append(to_text(getattr(string, to_native(chars_spec), chars_spec), errors='strict'))
chars = u''.join(chars).replace(u'"', u'').replace(u"'", u'')
return chars
def _parse_content(content):
'''parse our password data format into password and salt
:arg content: The data read from the file
:returns: password and salt
'''
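    # Stored format example: u'hunter42 salt=87654321' parses to
    # (u'hunter42', u'87654321'); content without ' salt=' yields (content, None).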
password = content
salt = None
salt_slug = u' salt='
try:
sep = content.rindex(salt_slug)
except ValueError:
# No salt
pass
else:
salt = password[sep + len(salt_slug):]
password = content[:sep]
return password, salt
def _format_content(password, salt, encrypt=None, ident=None):
"""Format the password and salt for saving
:arg password: the plaintext password to save
:arg salt: the salt to use when encrypting a password
:arg encrypt: Which method the user requests that this password is encrypted.
Note that the password is saved in clear. Encrypt just tells us if we
must save the salt value for idempotence. Defaults to None.
:arg ident: Which version of BCrypt algorithm to be used.
Valid only if value of encrypt is bcrypt.
Defaults to None.
:returns: a text string containing the formatted information
.. warning:: Passwords are saved in clear. This is because the playbooks
expect to get cleartext passwords from this lookup.
"""
if not encrypt and not salt:
return password
# At this point, the calling code should have assured us that there is a salt value.
if not salt:
raise AnsibleAssertionError('_format_content was called with encryption requested but no salt value')
if ident:
return u'%s salt=%s ident=%s' % (password, salt, ident)
return u'%s salt=%s' % (password, salt)
def _write_password_file(b_path, content):
b_pathdir = os.path.dirname(b_path)
makedirs_safe(b_pathdir, mode=0o700)
with open(b_path, 'wb') as f:
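        # Restrict the file to the owner before any secret content is written.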
os.chmod(b_path, 0o600)
b_content = to_bytes(content, errors='surrogate_or_strict') + b'\n'
f.write(b_content)
def _get_lock(b_path):
"""Get the lock for writing password file."""
first_process = False
b_pathdir = os.path.dirname(b_path)
lockfile_name = to_bytes("%s.ansible_lockfile" % hashlib.sha1(b_path).hexdigest())
lockfile = os.path.join(b_pathdir, lockfile_name)
if not os.path.exists(lockfile) and b_path != to_bytes('/dev/null'):
try:
makedirs_safe(b_pathdir, mode=0o700)
fd = os.open(lockfile, os.O_CREAT | os.O_EXCL)
os.close(fd)
first_process = True
except OSError as e:
if e.strerror != 'File exists':
raise
counter = 0
# if the lock is held by another process, wait until it is released
while os.path.exists(lockfile) and not first_process:
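        # Exponential backoff: sleep 1s, 2s, then 4s -- three waits, 7 seconds
        # total, matching the timeout quoted in the error message below.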
time.sleep(2 ** counter)
if counter >= 2:
raise AnsibleError("Password lookup cannot get the lock in 7 seconds, abort..."
"This may caused by un-removed lockfile"
"you can manually remove it from controller machine at %s and try again" % lockfile)
counter += 1
return first_process, lockfile
def _release_lock(lockfile):
"""Release the lock so other processes can read the password file."""
if os.path.exists(lockfile):
os.remove(lockfile)
class LookupModule(LookupBase):
def _parse_parameters(self, term):
"""Hacky parsing of params
See https://github.com/ansible/ansible-modules-core/issues/1968#issuecomment-136842156
and the first_found lookup for how we want to fix this later
"""
first_split = term.split(' ', 1)
if len(first_split) <= 1:
# Only a single argument given, therefore it's a path
relpath = term
params = dict()
else:
relpath = first_split[0]
params = parse_kv(first_split[1])
if '_raw_params' in params:
# Spaces in the path?
relpath = u' '.join((relpath, params['_raw_params']))
del params['_raw_params']
# Check that we parsed the params correctly
if not term.startswith(relpath):
# Likely, the user had a non-parameter following a parameter.
# Reject this as a user typo
raise AnsibleError('Unrecognized value after key=value parameters given to password lookup')
# No _raw_params means we already found the complete path when
# we split it initially
# Check for invalid parameters. Probably a user typo
invalid_params = frozenset(params.keys()).difference(VALID_PARAMS)
if invalid_params:
raise AnsibleError('Unrecognized parameter(s) given to password lookup: %s' % ', '.join(invalid_params))
# Set defaults
params['length'] = int(params.get('length', self.get_option('length')))
params['encrypt'] = params.get('encrypt', self.get_option('encrypt'))
params['ident'] = params.get('ident', self.get_option('ident'))
params['seed'] = params.get('seed', self.get_option('seed'))
params['chars'] = params.get('chars', self.get_option('chars'))
if params['chars'] and isinstance(params['chars'], string_types):
tmp_chars = []
if u',,' in params['chars']:
tmp_chars.append(u',')
tmp_chars.extend(c for c in params['chars'].replace(u',,', u',').split(u',') if c)
params['chars'] = tmp_chars
return relpath, params
def run(self, terms, variables, **kwargs):
ret = []
self.set_options(var_options=variables, direct=kwargs)
for term in terms:
relpath, params = self._parse_parameters(term)
path = self._loader.path_dwim(relpath)
b_path = to_bytes(path, errors='surrogate_or_strict')
chars = _gen_candidate_chars(params['chars'])
changed = None
# make sure only one process does the whole job first
first_process, lockfile = _get_lock(b_path)
content = _read_password_file(b_path)
if content is None or b_path == to_bytes('/dev/null'):
plaintext_password = random_password(params['length'], chars, params['seed'])
salt = None
changed = True
else:
plaintext_password, salt = _parse_content(content)
encrypt = params['encrypt']
if encrypt and not salt:
changed = True
try:
salt = random_salt(BaseHash.algorithms[encrypt].salt_size)
except KeyError:
salt = random_salt()
ident = params['ident']
if encrypt and not ident:
changed = True
try:
ident = BaseHash.algorithms[encrypt].implicit_ident
except KeyError:
ident = None
if changed and b_path != to_bytes('/dev/null'):
content = _format_content(plaintext_password, salt, encrypt=encrypt, ident=ident)
_write_password_file(b_path, content)
if first_process:
# let other processes continue
_release_lock(lockfile)
if encrypt:
password = do_encrypt(plaintext_password, encrypt, salt=salt, ident=ident)
ret.append(password)
else:
ret.append(plaintext_password)
return ret
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,430 |
password lookup rewrites file when using `encrypt`
|
### Summary
The password lookup rewrites the password file with the same content if the `encrypt` option is used. This happens on every invocation, not only when the `salt` or `ident` values are first added to the file.
This bug was introduced in commit 1bd7dcf339d and is caused by this code in `lib/ansible/plugins/lookup/password.py`:
```python
ident = params['ident']
if encrypt and not ident:
changed = True
try:
ident = BaseHash.algorithms[encrypt].implicit_ident
except KeyError:
ident = None
```
While this bug seems minor, since only the file modification time changes and the actual file contents stay the same, it is quite annoying when the files are stored on an encrypted overlay filesystem such as encfs, where the encrypted file content does change. It also confuses backup and sync tools that monitor the modification time.
### Issue Type
Bug Report
### Component Name
password
### Ansible Version
```console
$ ansible --version
ansible [core 2.15.0.dev0] (devel 1bda6750f5) last updated 2022/11/21 15:47:49 (GMT +200)
config file = /home/gaudenz/.ansible.cfg
configured module search path = ['/home/gaudenz/.ansible/plugins/library']
ansible python module location = /home/gaudenz/projects/ansible/ansible-core/lib/ansible
ansible collection location = /home/gaudenz/projects/ansible/ansible-core/collections:/usr/share/ansible/collections
executable location = /home/gaudenz/projects/ansible/ansible-core/bin/ansible
python version = 3.10.8 (main, Nov 4 2022, 09:21:25) [GCC 12.2.0] (/usr/bin/python)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
ANSIBLE_HOME(env: ANSIBLE_HOME) = /home/gaudenz/projects/ansible/ansible-core
CONFIG_FILE() = /home/gaudenz/.ansible.cfg
DEFAULT_MODULE_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/plugins/library']
DEFAULT_ROLES_PATH(/home/gaudenz/.ansible.cfg) = ['/home/gaudenz/.ansible/roles']
DEFAULT_TRANSPORT(/home/gaudenz/.ansible.cfg) = ssh
EDITOR(env: EDITOR) = vim
HOST_KEY_CHECKING(/home/gaudenz/.ansible.cfg) = False
CONNECTION:
==========
paramiko_ssh:
____________
host_key_checking(/home/gaudenz/.ansible.cfg) = False
ssh:
___
host_key_checking(/home/gaudenz/.ansible.cfg) = False
```
### OS / Environment
Debian testing
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```console
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:03.976492618 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
$ ansible -m debug -a 'msg={{ lookup("password", "mypassword encrypt=sha512_crypt length=10") }}' localhost && ls -l --full-time mypassword && md5sum mypassword
localhost | SUCCESS => {
"msg": "$6$Ec9Z80iDXuPXULzQ$419PY89EUBGUmpbyPKpoFCoM.VfrDahpUM91EOexZRQfsVmasGZk5fDoMAMS6ymGY2cQAp7It9iAzI2lnpkCn0"
}
-rw------- 1 gaudenz gaudenz 33 2022-11-21 15:54:11.916522686 +0100 mypassword
037a9e24e504f7c50a40e6f6c562ff5f mypassword
```
### Expected Results
The second invocation should not change the file modification time.
### Actual Results
```console
The second invocation changes the file modification time without actually changing the file content.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79430
|
https://github.com/ansible/ansible/pull/79431
|
3936b5c471068d86c3e51a454a1de2f0d2942845
|
c33a782a9c1e6d1e6b900c0eed642dfd3defac1c
| 2022-11-21T14:58:45Z |
python
| 2022-11-29T15:26:30Z |
test/units/plugins/lookup/test_password.py
|
# -*- coding: utf-8 -*-
# (c) 2015, Toshio Kuratomi <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
try:
import passlib
from passlib.handlers import pbkdf2
except ImportError:
passlib = None
pbkdf2 = None
import pytest
from units.mock.loader import DictDataLoader
from units.compat import unittest
from unittest.mock import mock_open, patch
from ansible.errors import AnsibleError
from ansible.module_utils.six import text_type
from ansible.module_utils.six.moves import builtins
from ansible.module_utils._text import to_bytes
from ansible.plugins.loader import PluginLoader, lookup_loader
from ansible.plugins.lookup import password
DEFAULT_LENGTH = 20
DEFAULT_CHARS = sorted([u'ascii_letters', u'digits', u".,:-_"])
DEFAULT_CANDIDATE_CHARS = u'.,:-_abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
# Currently there isn't a new-style parameter format; only old-style terms are covered here
old_style_params_data = (
# Simple case
dict(
term=u'/path/to/file',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Special characters in path
dict(
term=u'/path/with/embedded spaces and/file',
filename=u'/path/with/embedded spaces and/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/with/equals/cn=com.ansible',
filename=u'/path/with/equals/cn=com.ansible',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/with/unicode/くらとみ/file',
filename=u'/path/with/unicode/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Mix several special chars
dict(
term=u'/path/with/utf 8 and spaces/くらとみ/file',
filename=u'/path/with/utf 8 and spaces/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/with/encoding=unicode/くらとみ/file',
filename=u'/path/with/encoding=unicode/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/with/encoding=unicode/くらとみ/and spaces file',
filename=u'/path/with/encoding=unicode/くらとみ/and spaces file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Simple parameters
dict(
term=u'/path/to/file length=42',
filename=u'/path/to/file',
params=dict(length=42, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/to/file encrypt=pbkdf2_sha256',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt='pbkdf2_sha256', ident=None, chars=DEFAULT_CHARS, seed=None),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
dict(
term=u'/path/to/file chars=abcdefghijklmnop',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abcdefghijklmnop'], seed=None),
candidate_chars=u'abcdefghijklmnop',
),
dict(
term=u'/path/to/file chars=digits,abc,def',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'abc', u'def']), seed=None),
candidate_chars=u'abcdef0123456789',
),
dict(
term=u'/path/to/file seed=1',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=DEFAULT_CHARS, seed='1'),
candidate_chars=DEFAULT_CANDIDATE_CHARS,
),
# Including comma in chars
dict(
term=u'/path/to/file chars=abcdefghijklmnop,,digits',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'abcdefghijklmnop', u',', u'digits']), seed=None),
candidate_chars=u',abcdefghijklmnop0123456789',
),
dict(
term=u'/path/to/file chars=,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=[u','], seed=None),
candidate_chars=u',',
),
# Including = in chars
dict(
term=u'/path/to/file chars=digits,=,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'=', u',']), seed=None),
candidate_chars=u',=0123456789',
),
dict(
term=u'/path/to/file chars=digits,abc=def',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'abc=def']), seed=None),
candidate_chars=u'abc=def0123456789',
),
# Including unicode in chars
dict(
term=u'/path/to/file chars=digits,くらとみ,,',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'digits', u'くらとみ', u',']), seed=None),
candidate_chars=u',0123456789くらとみ',
),
# Including only unicode in chars
dict(
term=u'/path/to/file chars=くらとみ',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'くらとみ']), seed=None),
candidate_chars=u'くらとみ',
),
# Include ':' in path
dict(
term=u'/path/to/file_with:colon chars=ascii_letters,digits',
filename=u'/path/to/file_with:colon',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None,
chars=sorted([u'ascii_letters', u'digits']), seed=None),
candidate_chars=u'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789',
),
# Including special chars in both path and chars
# Special characters in path
dict(
term=u'/path/with/embedded spaces and/file chars=abc=def',
filename=u'/path/with/embedded spaces and/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abc=def'], seed=None),
candidate_chars=u'abc=def',
),
dict(
term=u'/path/with/equals/cn=com.ansible chars=abc=def',
filename=u'/path/with/equals/cn=com.ansible',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'abc=def'], seed=None),
candidate_chars=u'abc=def',
),
dict(
term=u'/path/with/unicode/くらとみ/file chars=くらとみ',
filename=u'/path/with/unicode/くらとみ/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, ident=None, chars=[u'くらとみ'], seed=None),
candidate_chars=u'くらとみ',
),
)
class TestParseParameters(unittest.TestCase):
def setUp(self):
self.fake_loader = DictDataLoader({'/path/to/somewhere': 'sdfsdf'})
self.password_lookup = lookup_loader.get('password')
self.password_lookup._loader = self.fake_loader
def test(self):
for testcase in old_style_params_data:
filename, params = self.password_lookup._parse_parameters(testcase['term'])
params['chars'].sort()
self.assertEqual(filename, testcase['filename'])
self.assertEqual(params, testcase['params'])
def test_unrecognized_value(self):
testcase = dict(term=u'/path/to/file chars=くらとみi sdfsdf',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']),
candidate_chars=u'くらとみ')
self.assertRaises(AnsibleError, self.password_lookup._parse_parameters, testcase['term'])
def test_invalid_params(self):
testcase = dict(term=u'/path/to/file chars=くらとみi somethign_invalid=123',
filename=u'/path/to/file',
params=dict(length=DEFAULT_LENGTH, encrypt=None, chars=[u'くらとみ']),
candidate_chars=u'くらとみ')
self.assertRaises(AnsibleError, self.password_lookup._parse_parameters, testcase['term'])
class TestReadPasswordFile(unittest.TestCase):
def setUp(self):
self.os_path_exists = password.os.path.exists
def tearDown(self):
password.os.path.exists = self.os_path_exists
def test_no_password_file(self):
password.os.path.exists = lambda x: False
self.assertEqual(password._read_password_file(b'/nonexistent'), None)
def test_with_password_file(self):
password.os.path.exists = lambda x: True
with patch.object(builtins, 'open', mock_open(read_data=b'Testing\n')) as m:
self.assertEqual(password._read_password_file(b'/etc/motd'), u'Testing')
class TestGenCandidateChars(unittest.TestCase):
def _assert_gen_candidate_chars(self, testcase):
expected_candidate_chars = testcase['candidate_chars']
params = testcase['params']
chars_spec = params['chars']
res = password._gen_candidate_chars(chars_spec)
self.assertEqual(res, expected_candidate_chars)
def test_gen_candidate_chars(self):
for testcase in old_style_params_data:
self._assert_gen_candidate_chars(testcase)
class TestRandomPassword(unittest.TestCase):
def _assert_valid_chars(self, res, chars):
for res_char in res:
self.assertIn(res_char, chars)
def test_default(self):
res = password.random_password()
self.assertEqual(len(res), DEFAULT_LENGTH)
self.assertTrue(isinstance(res, text_type))
self._assert_valid_chars(res, DEFAULT_CANDIDATE_CHARS)
def test_zero_length(self):
res = password.random_password(length=0)
self.assertEqual(len(res), 0)
self.assertTrue(isinstance(res, text_type))
self._assert_valid_chars(res, u',')
def test_just_a_common(self):
res = password.random_password(length=1, chars=u',')
self.assertEqual(len(res), 1)
self.assertEqual(res, u',')
def test_free_will(self):
# A Rush and Spinal Tap reference twofer
res = password.random_password(length=11, chars=u'a')
self.assertEqual(len(res), 11)
self.assertEqual(res, 'aaaaaaaaaaa')
self._assert_valid_chars(res, u'a')
def test_unicode(self):
res = password.random_password(length=11, chars=u'くらとみ')
self._assert_valid_chars(res, u'くらとみ')
self.assertEqual(len(res), 11)
def test_seed(self):
pw1 = password.random_password(seed=1)
pw2 = password.random_password(seed=1)
pw3 = password.random_password(seed=2)
self.assertEqual(pw1, pw2)
self.assertNotEqual(pw1, pw3)
def test_gen_password(self):
for testcase in old_style_params_data:
params = testcase['params']
candidate_chars = testcase['candidate_chars']
params_chars_spec = password._gen_candidate_chars(params['chars'])
password_string = password.random_password(length=params['length'],
chars=params_chars_spec)
self.assertEqual(len(password_string),
params['length'],
msg='generated password=%s has length (%s) instead of expected length (%s)' %
(password_string, len(password_string), params['length']))
for char in password_string:
self.assertIn(char, candidate_chars,
msg='%s not found in %s from chars spec %s' %
(char, candidate_chars, params['chars']))
class TestParseContent(unittest.TestCase):
def test_empty_password_file(self):
plaintext_password, salt = password._parse_content(u'')
self.assertEqual(plaintext_password, u'')
self.assertEqual(salt, None)
def test(self):
expected_content = u'12345678'
file_content = expected_content
plaintext_password, salt = password._parse_content(file_content)
self.assertEqual(plaintext_password, expected_content)
self.assertEqual(salt, None)
def test_with_salt(self):
expected_content = u'12345678 salt=87654321'
file_content = expected_content
plaintext_password, salt = password._parse_content(file_content)
self.assertEqual(plaintext_password, u'12345678')
self.assertEqual(salt, u'87654321')
class TestFormatContent(unittest.TestCase):
def test_no_encrypt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=u'87654321',
encrypt=False),
u'hunter42 salt=87654321')
def test_no_encrypt_no_salt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=None,
encrypt=None),
u'hunter42')
def test_encrypt(self):
self.assertEqual(
password._format_content(password=u'hunter42',
salt=u'87654321',
encrypt='pbkdf2_sha256'),
u'hunter42 salt=87654321')
def test_encrypt_no_salt(self):
self.assertRaises(AssertionError, password._format_content, u'hunter42', None, 'pbkdf2_sha256')
class TestWritePasswordFile(unittest.TestCase):
def setUp(self):
self.makedirs_safe = password.makedirs_safe
self.os_chmod = password.os.chmod
password.makedirs_safe = lambda path, mode: None
password.os.chmod = lambda path, mode: None
def tearDown(self):
password.makedirs_safe = self.makedirs_safe
password.os.chmod = self.os_chmod
def test_content_written(self):
with patch.object(builtins, 'open', mock_open()) as m:
password._write_password_file(b'/this/is/a/test/caf\xc3\xa9', u'Testing Café')
m.assert_called_once_with(b'/this/is/a/test/caf\xc3\xa9', 'wb')
m().write.assert_called_once_with(u'Testing Café\n'.encode('utf-8'))
class BaseTestLookupModule(unittest.TestCase):
def setUp(self):
self.fake_loader = DictDataLoader({'/path/to/somewhere': 'sdfsdf'})
self.password_lookup = lookup_loader.get('password')
self.password_lookup._loader = self.fake_loader
self.os_path_exists = password.os.path.exists
self.os_open = password.os.open
password.os.open = lambda path, flag: None
self.os_close = password.os.close
password.os.close = lambda fd: None
self.os_remove = password.os.remove
password.os.remove = lambda path: None
self.makedirs_safe = password.makedirs_safe
password.makedirs_safe = lambda path, mode: None
def tearDown(self):
password.os.path.exists = self.os_path_exists
password.os.open = self.os_open
password.os.close = self.os_close
password.os.remove = self.os_remove
password.makedirs_safe = self.makedirs_safe
class TestLookupModuleWithoutPasslib(BaseTestLookupModule):
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_no_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere'], None)
# FIXME: assert something useful
for result in results:
assert len(result) == DEFAULT_LENGTH
assert isinstance(result, text_type)
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_password_already_created_no_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
for result in results:
self.assertEqual(result, u'hunter42')
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_only_a(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere chars=a'], None)
for result in results:
self.assertEqual(result, u'a' * DEFAULT_LENGTH)
@patch('time.sleep')
def test_lock_been_held(self, mock_sleep):
# pretend the lock file is here
password.os.path.exists = lambda x: True
try:
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
# should timeout here
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
self.fail("Lookup didn't timeout when lock already been held")
except AnsibleError:
pass
def test_lock_not_been_held(self):
# pretend there is now a password file but no lock
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
try:
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
# should not timeout here
results = self.password_lookup.run([u'/path/to/somewhere chars=anything'], None)
except AnsibleError:
self.fail('Lookup timed out even though the lock is free')
for result in results:
self.assertEqual(result, u'hunter42')
@pytest.mark.skipif(passlib is None, reason='passlib must be installed to run these tests')
class TestLookupModuleWithPasslib(BaseTestLookupModule):
def setUp(self):
super(TestLookupModuleWithPasslib, self).setUp()
# Different releases of passlib default to a different number of rounds
self.sha256 = passlib.registry.get_crypt_handler('pbkdf2_sha256')
sha256_for_tests = pbkdf2.create_pbkdf2_hash("sha256", 32, 20000)
passlib.registry.register_crypt_handler(sha256_for_tests, force=True)
def tearDown(self):
super(TestLookupModuleWithPasslib, self).tearDown()
passlib.registry.register_crypt_handler(self.sha256, force=True)
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
results = self.password_lookup.run([u'/path/to/somewhere encrypt=pbkdf2_sha256'], None)
# pbkdf2 format plus hash
expected_password_length = 76
for result in results:
self.assertEqual(len(result), expected_password_length)
# result should have 5 parts split by '$'
str_parts = result.split('$', 5)
# verify the result is parseable by passlib
crypt_parts = passlib.hash.pbkdf2_sha256.parsehash(result)
# verify it used the right algo type
self.assertEqual(str_parts[1], 'pbkdf2-sha256')
self.assertEqual(len(str_parts), 5)
# verify the string and parsehash agree on the number of rounds
self.assertEqual(int(str_parts[2]), crypt_parts['rounds'])
self.assertIsInstance(result, text_type)
@patch.object(PluginLoader, '_get_paths')
@patch('ansible.plugins.lookup.password._write_password_file')
def test_password_already_created_encrypt(self, mock_get_paths, mock_write_file):
mock_get_paths.return_value = ['/path/one', '/path/two', '/path/three']
password.os.path.exists = lambda x: x == to_bytes('/path/to/somewhere')
with patch.object(builtins, 'open', mock_open(read_data=b'hunter42 salt=87654321\n')) as m:
results = self.password_lookup.run([u'/path/to/somewhere chars=anything encrypt=pbkdf2_sha256'], None)
for result in results:
self.assertEqual(result, u'$pbkdf2-sha256$20000$ODc2NTQzMjE$Uikde0cv0BKaRaAXMrUQB.zvG4GmnjClwjghwIRf2gU')
@pytest.mark.skipif(passlib is None, reason='passlib must be installed to run these tests')
class TestLookupModuleWithPasslibWrappedAlgo(BaseTestLookupModule):
def setUp(self):
super(TestLookupModuleWithPasslibWrappedAlgo, self).setUp()
self.os_path_exists = password.os.path.exists
def tearDown(self):
super(TestLookupModuleWithPasslibWrappedAlgo, self).tearDown()
password.os.path.exists = self.os_path_exists
@patch('ansible.plugins.lookup.password._write_password_file')
def test_encrypt_wrapped_crypt_algo(self, mock_write_file):
password.os.path.exists = self.password_lookup._loader.path_exists
with patch.object(builtins, 'open', mock_open(read_data=self.password_lookup._loader._get_file_contents('/path/to/somewhere')[0])) as m:
results = self.password_lookup.run([u'/path/to/somewhere encrypt=ldap_sha256_crypt'], None)
wrapper = getattr(passlib.hash, 'ldap_sha256_crypt')
self.assertEqual(len(results), 1)
result = results[0]
self.assertIsInstance(result, text_type)
expected_password_length = 76
self.assertEqual(len(result), expected_password_length)
# result should have 5 parts split by '$'
str_parts = result.split('$')
self.assertEqual(len(str_parts), 5)
# verify the string and passlib agree on the number of rounds
self.assertEqual(str_parts[2], "rounds=%s" % wrapper.default_rounds)
# verify it used the right algo type
self.assertEqual(str_parts[0], '{CRYPT}')
# verify it used the right algo type
self.assertTrue(wrapper.verify(self.password_lookup._loader._get_file_contents('/path/to/somewhere')[0], result))
# verify a password with a non default rounds value
# generated with: echo test | mkpasswd -s --rounds 660000 -m sha-256 --salt testansiblepass.
hashpw = '{CRYPT}$5$rounds=660000$testansiblepass.$KlRSdA3iFXoPI.dEwh7AixiXW3EtCkLrlQvlYA2sluD'
self.assertTrue(wrapper.verify('test', hashpw))
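        # Editorial note: passlib's ldap_sha256_crypt is a wrapper around
        # sha256_crypt that prefixes the usual $5$rounds=N$salt$checksum
        # string with '{CRYPT}', which is why this test checks both
        # str_parts[0] == '{CRYPT}' and the 'rounds=...' field in
        # str_parts[2].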
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,459 |
Unexpected exception when a task is named "meta"
|
### Summary
When a task is named with the word meta (like `- name: "meta"`), the task fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
```
### Issue Type
Bug Report
### Component Name
all
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/utilisateur/ansible/prj_chainage_traitement/ansible.cfg
configured module search path = ['/home/utilisateur/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### OS / Environment
Linux Debian 11.1
### Steps to Reproduce
The following playbook reproduces this issue:
```yaml
- hosts: localhost
gather_facts: no
connection: localhost
tasks:
- name: "meta"
debug:
msg: "abcdefgh"
```
### Expected Results
You should obtain:
```
TASK [meta]
ok: [localhost] => {
"msg": "abcdefgh"
}
```
When the name is changed to anything other than `meta` (for example `- name: "meta2"`), there is no error.
Ansible should generate an error when the user tries to name a task `meta`.
### Actual Results
The following error is returned:
```console
PLAY [localhost] ***********
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
to see the full traceback, use -vvv
```
With verbose mode, a Python exception is returned:
```
PLAY [localhost] *******************
META: ran handlers
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/playbook.py", line 129, in run
results = pbex.run()
File "/usr/lib/python3/dist-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3/dist-packages/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 224, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 97, in _get_next_task_lockstep
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
File "/usr/lib/python3/dist-packages/ansible/executor/play_iterator.py", line 253, in get_next_task_for_host
display.debug(" ^ task is: %s" % task)
File "/usr/lib/python3/dist-packages/ansible/playbook/task.py", line 156, in __repr__
return "TASK: meta (%s)" % self.args['_raw_params']
KeyError: '_raw_params'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
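For context, the traceback shows `Task.__repr__` treating any task whose *name* appears in `C._ACTION_META` as a meta task and then reading `self.args['_raw_params']` unconditionally. Below is a minimal sketch of a defensive variant (an illustration only, not necessarily the change merged in the linked PR; it reuses the `C._ACTION_META` constant and the attributes visible in the traceback):
```python
def __repr__(self):
    ''' returns a human readable representation of the task '''
    # Dispatch on the resolved action rather than the user-supplied name,
    # so an ordinary task that merely happens to be *named* "meta" prints
    # like any other task instead of raising KeyError.
    if self.action in C._ACTION_META:
        return "TASK: meta (%s)" % self.args.get('_raw_params', '')
    return "TASK: %s" % self.get_name()
```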
|
https://github.com/ansible/ansible/issues/79459
|
https://github.com/ansible/ansible/pull/79464
|
01ff57bdff35e0e97b16caa2c420fe01039d13e1
|
3bda4eae6f1273a42f14b3dedc0d4f5928b290f6
| 2022-11-24T09:36:31Z |
python
| 2022-11-29T15:43:14Z |
changelogs/fragments/79459-fix-meta-task-check.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,459 |
Unexpected exception when a task is named "meta"
|
### Summary
When a task is named with the word meta (like `- name: "meta"`), the task fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
```
### Issue Type
Bug Report
### Component Name
all
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/utilisateur/ansible/prj_chainage_traitement/ansible.cfg
configured module search path = ['/home/utilisateur/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### OS / Environment
Linux Debian 11.1
### Steps to Reproduce
The following playbook reproduces this issue:
```yaml
- hosts: localhost
gather_facts: no
connection: localhost
tasks:
- name: "meta"
debug:
msg: "abcdefgh"
```
### Expected Results
You should obtain:
```
TASK [meta]
ok: [localhost] => {
"msg": "abcdefgh"
}
```
When the name is changed to anything other than `meta` (for example `- name: "meta2"`), there is no error.
Ansible should generate an error when the user tries to name a task `meta`.
### Actual Results
The following error is returned:
```console
PLAY [localhost] ***********
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
to see the full traceback, use -vvv
```
With verbose mode, a Python exception is returned:
```
PLAY [localhost] *******************
META: ran handlers
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/playbook.py", line 129, in run
results = pbex.run()
File "/usr/lib/python3/dist-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3/dist-packages/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 224, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 97, in _get_next_task_lockstep
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
File "/usr/lib/python3/dist-packages/ansible/executor/play_iterator.py", line 253, in get_next_task_for_host
display.debug(" ^ task is: %s" % task)
File "/usr/lib/python3/dist-packages/ansible/playbook/task.py", line 156, in __repr__
return "TASK: meta (%s)" % self.args['_raw_params']
KeyError: '_raw_params'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79459
|
https://github.com/ansible/ansible/pull/79464
|
01ff57bdff35e0e97b16caa2c420fe01039d13e1
|
3bda4eae6f1273a42f14b3dedc0d4f5928b290f6
| 2022-11-24T09:36:31Z |
python
| 2022-11-29T15:43:14Z |
lib/ansible/playbook/task.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleAssertionError
from ansible.module_utils._text import to_native
from ansible.module_utils.six import string_types
from ansible.parsing.mod_args import ModuleArgsParser
from ansible.parsing.yaml.objects import AnsibleBaseYAMLObject, AnsibleMapping
from ansible.plugins.loader import lookup_loader
from ansible.playbook.attribute import FieldAttribute, NonInheritableFieldAttribute
from ansible.playbook.base import Base
from ansible.playbook.block import Block
from ansible.playbook.collectionsearch import CollectionSearch
from ansible.playbook.conditional import Conditional
from ansible.playbook.loop_control import LoopControl
from ansible.playbook.role import Role
from ansible.playbook.taggable import Taggable
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.sentinel import Sentinel
__all__ = ['Task']
display = Display()
class Task(Base, Conditional, Taggable, CollectionSearch):
"""
A task is a language feature that represents a call to a module, with given arguments and other parameters.
A handler is a subclass of a task.
Usage:
Task.load(datastructure) -> Task
Task.something(...)
"""
# =================================================================================
# ATTRIBUTES
# load_<attribute_name> and
# validate_<attribute_name>
# will be used if defined
# might be possible to define others
# NOTE: ONLY set defaults on task attributes that are not inheritable,
# inheritance is only triggered if the 'current value' is Sentinel,
    # default can be set at play/top level object and inheritance will take its course.
args = FieldAttribute(isa='dict', default=dict)
action = FieldAttribute(isa='string')
async_val = FieldAttribute(isa='int', default=0, alias='async')
changed_when = FieldAttribute(isa='list', default=list)
delay = FieldAttribute(isa='int', default=5)
delegate_to = FieldAttribute(isa='string')
delegate_facts = FieldAttribute(isa='bool')
failed_when = FieldAttribute(isa='list', default=list)
loop = FieldAttribute()
loop_control = NonInheritableFieldAttribute(isa='class', class_type=LoopControl, default=LoopControl)
notify = FieldAttribute(isa='list')
poll = FieldAttribute(isa='int', default=C.DEFAULT_POLL_INTERVAL)
register = FieldAttribute(isa='string', static=True)
retries = FieldAttribute(isa='int', default=3)
until = FieldAttribute(isa='list', default=list)
# deprecated, used to be loop and loop_args but loop has been repurposed
loop_with = NonInheritableFieldAttribute(isa='string', private=True)
def __init__(self, block=None, role=None, task_include=None):
        ''' constructs a task; without the Task.load classmethod, it will be pretty blank '''
self._role = role
self._parent = None
self.implicit = False
self.resolved_action = None
if task_include:
self._parent = task_include
else:
self._parent = block
super(Task, self).__init__()
def get_name(self, include_role_fqcn=True):
''' return the name of the task '''
if self._role:
role_name = self._role.get_name(include_role_fqcn=include_role_fqcn)
if self._role and self.name:
return "%s : %s" % (role_name, self.name)
elif self.name:
return self.name
else:
if self._role:
return "%s : %s" % (role_name, self.action)
else:
return "%s" % (self.action,)
def _merge_kv(self, ds):
if ds is None:
return ""
elif isinstance(ds, string_types):
return ds
elif isinstance(ds, dict):
buf = ""
for (k, v) in ds.items():
if k.startswith('_'):
continue
buf = buf + "%s=%s " % (k, v)
buf = buf.strip()
return buf
@staticmethod
def load(data, block=None, role=None, task_include=None, variable_manager=None, loader=None):
t = Task(block=block, role=role, task_include=task_include)
return t.load_data(data, variable_manager=variable_manager, loader=loader)
def __repr__(self):
''' returns a human readable representation of the task '''
if self.get_name() in C._ACTION_META:
return "TASK: meta (%s)" % self.args['_raw_params']
else:
return "TASK: %s" % self.get_name()
def _preprocess_with_loop(self, ds, new_ds, k, v):
''' take a lookup plugin name and store it correctly '''
loop_name = k.removeprefix("with_")
if new_ds.get('loop') is not None or new_ds.get('loop_with') is not None:
raise AnsibleError("duplicate loop in task: %s" % loop_name, obj=ds)
if v is None:
raise AnsibleError("you must specify a value when using %s" % k, obj=ds)
new_ds['loop_with'] = loop_name
new_ds['loop'] = v
# display.deprecated("with_ type loops are being phased out, use the 'loop' keyword instead",
# version="2.10", collection_name='ansible.builtin')
def preprocess_data(self, ds):
'''
        tasks are especially complex arguments so they need pre-processing.
        Keep it short.
'''
if not isinstance(ds, dict):
raise AnsibleAssertionError('ds (%s) should be a dict but was a %s' % (ds, type(ds)))
# the new, cleaned datastructure, which will have legacy
# items reduced to a standard structure suitable for the
# attributes of the task class
new_ds = AnsibleMapping()
if isinstance(ds, AnsibleBaseYAMLObject):
new_ds.ansible_pos = ds.ansible_pos
# since this affects the task action parsing, we have to resolve in preprocess instead of in typical validator
default_collection = AnsibleCollectionConfig.default_collection
collections_list = ds.get('collections')
if collections_list is None:
# use the parent value if our ds doesn't define it
collections_list = self.collections
else:
# Validate this untemplated field early on to guarantee we are dealing with a list.
# This is also done in CollectionSearch._load_collections() but this runs before that call.
collections_list = self.get_validated_value('collections', self.fattributes.get('collections'), collections_list, None)
if default_collection and not self._role: # FIXME: and not a collections role
if collections_list:
if default_collection not in collections_list:
collections_list.insert(0, default_collection)
else:
collections_list = [default_collection]
if collections_list and 'ansible.builtin' not in collections_list and 'ansible.legacy' not in collections_list:
collections_list.append('ansible.legacy')
if collections_list:
ds['collections'] = collections_list
# use the args parsing class to determine the action, args,
# and the delegate_to value from the various possible forms
# supported as legacy
args_parser = ModuleArgsParser(task_ds=ds, collection_list=collections_list)
try:
(action, args, delegate_to) = args_parser.parse()
except AnsibleParserError as e:
            # if the raised exception was created with obj=ds args, then it includes the detail,
            # so we don't need to add it and can just re-raise.
if e.obj:
raise
# But if it wasn't, we can add the yaml object now to get more detail
raise AnsibleParserError(to_native(e), obj=ds, orig_exc=e)
else:
self.resolved_action = args_parser.resolved_action
# the command/shell/script modules used to support the `cmd` arg,
# which corresponds to what we now call _raw_params, so move that
# value over to _raw_params (assuming it is empty)
if action in C._ACTION_HAS_CMD:
if 'cmd' in args:
if args.get('_raw_params', '') != '':
raise AnsibleError("The 'cmd' argument cannot be used when other raw parameters are specified."
" Please put everything in one or the other place.", obj=ds)
args['_raw_params'] = args.pop('cmd')
new_ds['action'] = action
new_ds['args'] = args
new_ds['delegate_to'] = delegate_to
# we handle any 'vars' specified in the ds here, as we may
# be adding things to them below (special handling for includes).
# When that deprecated feature is removed, this can be too.
if 'vars' in ds:
# _load_vars is defined in Base, and is used to load a dictionary
# or list of dictionaries in a standard way
new_ds['vars'] = self._load_vars(None, ds.get('vars'))
else:
new_ds['vars'] = dict()
for (k, v) in ds.items():
if k in ('action', 'local_action', 'args', 'delegate_to') or k == action or k == 'shell':
# we don't want to re-assign these values, which were determined by the ModuleArgsParser() above
continue
elif k.startswith('with_') and k.removeprefix("with_") in lookup_loader:
# transform into loop property
self._preprocess_with_loop(ds, new_ds, k, v)
elif C.INVALID_TASK_ATTRIBUTE_FAILED or k in self.fattributes:
new_ds[k] = v
else:
display.warning("Ignoring invalid attribute: %s" % k)
return super(Task, self).preprocess_data(new_ds)
def _load_loop_control(self, attr, ds):
if not isinstance(ds, dict):
raise AnsibleParserError(
"the `loop_control` value must be specified as a dictionary and cannot "
"be a variable itself (though it can contain variables)",
obj=ds,
)
return LoopControl.load(data=ds, variable_manager=self._variable_manager, loader=self._loader)
def _validate_attributes(self, ds):
try:
super(Task, self)._validate_attributes(ds)
except AnsibleParserError as e:
e.message += '\nThis error can be suppressed as a warning using the "invalid_task_attribute_failed" configuration'
raise e
def _validate_changed_when(self, attr, name, value):
if not isinstance(value, list):
setattr(self, name, [value])
def _validate_failed_when(self, attr, name, value):
if not isinstance(value, list):
setattr(self, name, [value])
def post_validate(self, templar):
'''
Override of base class post_validate, to also do final validation on
the block and task include (if any) to which this task belongs.
'''
if self._parent:
self._parent.post_validate(templar)
if AnsibleCollectionConfig.default_collection:
pass
super(Task, self).post_validate(templar)
def _post_validate_loop(self, attr, value, templar):
'''
Override post validation for the loop field, which is templated
specially in the TaskExecutor class when evaluating loops.
'''
return value
def _post_validate_environment(self, attr, value, templar):
'''
Override post validation of vars on the play, as we don't want to
template these too early.
'''
env = {}
if value is not None:
def _parse_env_kv(k, v):
try:
env[k] = templar.template(v, convert_bare=False)
except AnsibleUndefinedVariable as e:
error = to_native(e)
if self.action in C._ACTION_FACT_GATHERING and 'ansible_facts.env' in error or 'ansible_env' in error:
# ignore as fact gathering is required for 'env' facts
return
raise
if isinstance(value, list):
for env_item in value:
if isinstance(env_item, dict):
for k in env_item:
_parse_env_kv(k, env_item[k])
else:
isdict = templar.template(env_item, convert_bare=False)
if isinstance(isdict, dict):
env |= isdict
else:
display.warning("could not parse environment value, skipping: %s" % value)
elif isinstance(value, dict):
# should not really happen
env = dict()
for env_item in value:
_parse_env_kv(env_item, value[env_item])
else:
# at this point it should be a simple string, also should not happen
env = templar.template(value, convert_bare=False)
return env
def _post_validate_changed_when(self, attr, value, templar):
'''
changed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_failed_when(self, attr, value, templar):
'''
failed_when is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def _post_validate_until(self, attr, value, templar):
'''
until is evaluated after the execution of the task is complete,
and should not be templated during the regular post_validate step.
'''
return value
def get_vars(self):
all_vars = dict()
if self._parent:
all_vars |= self._parent.get_vars()
all_vars |= self.vars
if 'tags' in all_vars:
del all_vars['tags']
if 'when' in all_vars:
del all_vars['when']
return all_vars
def get_include_params(self):
all_vars = dict()
if self._parent:
all_vars |= self._parent.get_include_params()
if self.action in C._ACTION_ALL_INCLUDES:
all_vars |= self.vars
return all_vars
def copy(self, exclude_parent=False, exclude_tasks=False):
new_me = super(Task, self).copy()
new_me._parent = None
if self._parent and not exclude_parent:
new_me._parent = self._parent.copy(exclude_tasks=exclude_tasks)
new_me._role = None
if self._role:
new_me._role = self._role
new_me.implicit = self.implicit
new_me.resolved_action = self.resolved_action
new_me._uuid = self._uuid
return new_me
def serialize(self):
data = super(Task, self).serialize()
if not self._squashed and not self._finalized:
if self._parent:
data['parent'] = self._parent.serialize()
data['parent_type'] = self._parent.__class__.__name__
if self._role:
data['role'] = self._role.serialize()
data['implicit'] = self.implicit
data['resolved_action'] = self.resolved_action
return data
def deserialize(self, data):
# import is here to avoid import loops
from ansible.playbook.task_include import TaskInclude
from ansible.playbook.handler_task_include import HandlerTaskInclude
parent_data = data.get('parent', None)
if parent_data:
parent_type = data.get('parent_type')
if parent_type == 'Block':
p = Block()
elif parent_type == 'TaskInclude':
p = TaskInclude()
elif parent_type == 'HandlerTaskInclude':
p = HandlerTaskInclude()
p.deserialize(parent_data)
self._parent = p
del data['parent']
role_data = data.get('role')
if role_data:
r = Role()
r.deserialize(role_data)
self._role = r
del data['role']
self.implicit = data.get('implicit', False)
self.resolved_action = data.get('resolved_action')
super(Task, self).deserialize(data)
def set_loader(self, loader):
'''
Sets the loader on this object and recursively on parent, child objects.
This is used primarily after the Task has been serialized/deserialized, which
does not preserve the loader.
'''
self._loader = loader
if self._parent:
self._parent.set_loader(loader)
def _get_parent_attribute(self, attr, omit=False):
'''
Generic logic to get the attribute or parent attribute for a task value.
'''
fattr = self.fattributes[attr]
extend = fattr.extend
prepend = fattr.prepend
try:
# omit self, and only get parent values
if omit:
value = Sentinel
else:
value = getattr(self, f'_{attr}', Sentinel)
# If parent is static, we can grab attrs from the parent
# otherwise, defer to the grandparent
if getattr(self._parent, 'statically_loaded', True):
_parent = self._parent
else:
_parent = self._parent._parent
if _parent and (value is Sentinel or extend):
if getattr(_parent, 'statically_loaded', True):
# vars are always inheritable, other attributes might not be for the parent but still should be for other ancestors
if attr != 'vars' and hasattr(_parent, '_get_parent_attribute'):
parent_value = _parent._get_parent_attribute(attr)
else:
parent_value = getattr(_parent, f'_{attr}', Sentinel)
if extend:
value = self._extend_value(value, parent_value, prepend)
else:
value = parent_value
except KeyError:
pass
return value
def all_parents_static(self):
if self._parent:
return self._parent.all_parents_static()
return True
def get_first_parent_include(self):
from ansible.playbook.task_include import TaskInclude
if self._parent:
if isinstance(self._parent, TaskInclude):
return self._parent
return self._parent.get_first_parent_include()
return None
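# Editorial usage sketch (illustration only, not part of the module): a Task
# is normally built through the load() helper defined above, for example
#
#   data = {'name': 'ping the host', 'ansible.builtin.ping': {}}
#   task = Task.load(data, variable_manager=vm, loader=loader)
#   task.get_name()   # -> 'ping the host'
#
# where vm and loader are the caller's VariableManager and DataLoader
# instances.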
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,459 |
Unexpected exception when a task is named "meta"
|
### Summary
When a task is named with the word meta (like `- name: "meta"`), the task fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
```
### Issue Type
Bug Report
### Component Name
all
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/utilisateur/ansible/prj_chainage_traitement/ansible.cfg
configured module search path = ['/home/utilisateur/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### OS / Environment
Linux Debian 11.1
### Steps to Reproduce
The following playbook reproduces this issue:
```yaml
- hosts: localhost
gather_facts: no
connection: localhost
tasks:
- name: "meta"
debug:
msg: "abcdefgh"
```
### Expected Results
You should obtain:
```
TASK [meta]
ok: [localhost] => {
"msg": "abcdefgh"
}
```
When the name is changed to anything other than `meta` (for example `- name: "meta2"`), there is no error.
Ansible should generate an error when the user tries to name a task `meta`.
### Actual Results
The following error is returned:
```console
PLAY [localhost] ***********
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
to see the full traceback, use -vvv
```
With verbose mode, a Python exception is returned:
```
PLAY [localhost] *******************
META: ran handlers
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/playbook.py", line 129, in run
results = pbex.run()
File "/usr/lib/python3/dist-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3/dist-packages/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 224, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 97, in _get_next_task_lockstep
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
File "/usr/lib/python3/dist-packages/ansible/executor/play_iterator.py", line 253, in get_next_task_for_host
display.debug(" ^ task is: %s" % task)
File "/usr/lib/python3/dist-packages/ansible/playbook/task.py", line 156, in __repr__
return "TASK: meta (%s)" % self.args['_raw_params']
KeyError: '_raw_params'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79459
|
https://github.com/ansible/ansible/pull/79464
|
01ff57bdff35e0e97b16caa2c420fe01039d13e1
|
3bda4eae6f1273a42f14b3dedc0d4f5928b290f6
| 2022-11-24T09:36:31Z |
python
| 2022-11-29T15:43:14Z |
test/integration/targets/tasks/playbook.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,459 |
Unexpected exception when a task is named "meta"
|
### Summary
When a task is named with the word meta (like `- name: "meta"`), the task fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
```
### Issue Type
Bug Report
### Component Name
all
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/utilisateur/ansible/prj_chainage_traitement/ansible.cfg
configured module search path = ['/home/utilisateur/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### OS / Environment
Linux Debian 11.1
### Steps to Reproduce
The following playbook reproduces this issue:
```yaml
- hosts: localhost
gather_facts: no
connection: localhost
tasks:
- name: "meta"
debug:
msg: "abcdefgh"
```
### Expected Results
You should obtain:
```
TASK [meta]
ok: [localhost] => {
"msg": "abcdefgh"
}
```
When the name is changed to anything other than `meta` (for example `- name: "meta2"`), there is no error.
Ansible should generate an error when the user tries to name a task `meta`.
### Actual Results
The following error is returned:
```console
PLAY [localhost] ***********
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
to see the full traceback, use -vvv
```
With verbose mode, a Python exception is returned:
```
PLAY [localhost] *******************
META: ran handlers
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/playbook.py", line 129, in run
results = pbex.run()
File "/usr/lib/python3/dist-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3/dist-packages/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 224, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 97, in _get_next_task_lockstep
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
File "/usr/lib/python3/dist-packages/ansible/executor/play_iterator.py", line 253, in get_next_task_for_host
display.debug(" ^ task is: %s" % task)
File "/usr/lib/python3/dist-packages/ansible/playbook/task.py", line 156, in __repr__
return "TASK: meta (%s)" % self.args['_raw_params']
KeyError: '_raw_params'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79459
|
https://github.com/ansible/ansible/pull/79464
|
01ff57bdff35e0e97b16caa2c420fe01039d13e1
|
3bda4eae6f1273a42f14b3dedc0d4f5928b290f6
| 2022-11-24T09:36:31Z |
python
| 2022-11-29T15:43:14Z |
test/integration/targets/tasks/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,459 |
Unexpected exception when a task is named "meta"
|
### Summary
When a task is named with the word meta (like `- name: "meta"`), the task fails with the following error:
```
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
```
### Issue Type
Bug Report
### Component Name
all
### Ansible Version
```console
$ ansible --version
ansible 2.10.8
config file = /home/utilisateur/ansible/prj_chainage_traitement/ansible.cfg
configured module search path = ['/home/utilisateur/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
```
### OS / Environment
Linux Debian 11.1
### Steps to Reproduce
The following playbook reproduces this issue:
```yaml
- hosts: localhost
gather_facts: no
connection: localhost
tasks:
- name: "meta"
debug:
msg: "abcdefgh"
```
### Expected Results
You should obtain:
```
TASK [meta]
ok: [localhost] => {
"msg": "abcdefgh"
}
```
When the name is changed to anything other than `meta` (for example `- name: "meta2"`), there is no error.
Ansible should generate an error when the user tries to name a task `meta`.
### Actual Results
The following error is returned:
```console
PLAY [localhost] ***********
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
to see the full traceback, use -vvv
```
With verbose mode, a Python exception is returned:
```
PLAY [localhost] *******************
META: ran handlers
ERROR! Unexpected Exception, this is probably a bug: '_raw_params'
the full traceback was:
Traceback (most recent call last):
File "/usr/bin/ansible-playbook", line 123, in <module>
exit_code = cli.run()
File "/usr/lib/python3/dist-packages/ansible/cli/playbook.py", line 129, in run
results = pbex.run()
File "/usr/lib/python3/dist-packages/ansible/executor/playbook_executor.py", line 169, in run
result = self._tqm.run(play=play)
File "/usr/lib/python3/dist-packages/ansible/executor/task_queue_manager.py", line 281, in run
play_return = strategy.run(iterator, play_context)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 224, in run
host_tasks = self._get_next_task_lockstep(hosts_left, iterator)
File "/usr/lib/python3/dist-packages/ansible/plugins/strategy/linear.py", line 97, in _get_next_task_lockstep
host_tasks[host.name] = iterator.get_next_task_for_host(host, peek=True)
File "/usr/lib/python3/dist-packages/ansible/executor/play_iterator.py", line 253, in get_next_task_for_host
display.debug(" ^ task is: %s" % task)
File "/usr/lib/python3/dist-packages/ansible/playbook/task.py", line 156, in __repr__
return "TASK: meta (%s)" % self.args['_raw_params']
KeyError: '_raw_params'
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79459
|
https://github.com/ansible/ansible/pull/79464
|
01ff57bdff35e0e97b16caa2c420fe01039d13e1
|
3bda4eae6f1273a42f14b3dedc0d4f5928b290f6
| 2022-11-24T09:36:31Z |
python
| 2022-11-29T15:43:14Z |
test/integration/targets/tasks/tasks/main.yml
|
# make sure tasks with an undefined variable in the name are gracefully handled
- name: "Task name with undefined variable: {{ not_defined }}"
debug:
msg: Hello
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,047 |
ansible-inventory ignores inventory order and returns hosts sorted by name
|
##### SUMMARY
_ansible-inventory_ ignores inventory order and returns hosts sorted by name
_ansible-inventory_ is very convenient for quickly having a look at the inventory; it is very handy when reviewing complex inventories built from many inventory files, scripts and group_vars.
It matters when chronological host order (sorted by creation time) is more important than names, and for projects that use group position in patterns, for example `groups['servers'][0]`, as is often done for services like Grafana and Prometheus.
I'd like to use the ansible-inventory command to quickly review inventories and occasionally use the output for executing predictable tests.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.9.16
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.5.0
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
ansible 2.5.0
python version = 2.7.12 (default, Oct 5 2020, 13:56:01) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
ansible --list-hosts all -i host02,host03,host01
```
hosts (3):
host02
host03
host01
```
ansible-inventory --list all -i host02,host03,host01
```
{
"_meta": {
"hostvars": {
"host01": {},
"host02": {},
"host03": {}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"host01",
"host02",
"host03"
]
}
}
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
_ansible-inventory_ shows Ansible inventory information, the order provided in the inventory (default).
[Ordering execution based on inventory](https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html#ordering-execution-based-on-inventory)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
_ansible-inventory_ command ignores inventory order and returns hosts sorted alphabetically by name
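For context, every host list that `ansible-inventory --list` emits is built with `sorted(group.hosts, key=attrgetter('name'))` (see the `lib/ansible/cli/inventory.py` source further below), which is exactly where the source order is discarded. A standalone sketch of the difference, with `SimpleNamespace` standing in for Ansible's `Host` objects:
```python
from operator import attrgetter
from types import SimpleNamespace
# hosts in the order the inventory source supplied them
hosts = [SimpleNamespace(name=n) for n in ("host02", "host03", "host01")]
# what --list currently prints: alphabetical by name
print([h.name for h in sorted(hosts, key=attrgetter("name"))])  # ['host01', 'host02', 'host03']
# the inventory source order the reporter expects
print([h.name for h in hosts])  # ['host02', 'host03', 'host01']
```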
|
https://github.com/ansible/ansible/issues/73047
|
https://github.com/ansible/ansible/pull/74839
|
1998521e2d5b89bc53d00639bad178330ebb98df
|
5b51b560d0328e35dad5d4c77688f7577081c0ed
| 2020-12-22T12:32:27Z |
python
| 2022-11-30T14:25:34Z |
changelogs/fragments/unsorted.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,047 |
ansible-inventory ignores inventory order and returns hosts sorted by name
|
##### SUMMARY
_ansible-inventory_ ignores inventory order and returns hosts sorted by name
_ansible-inventory_ is very convenient for quickly having a look at the inventory; it is very handy when reviewing complex inventories built from many inventory files, scripts and group_vars.
It matters when chronological host order (sorted by creation time) is more important than names, and for projects that use group position in patterns, for example `groups['servers'][0]`, as is often done for services like Grafana and Prometheus.
I'd like to use the ansible-inventory command to quickly review inventories and occasionally use the output for executing predictable tests.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.9.16
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.5.0
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
ansible 2.5.0
python version = 2.7.12 (default, Oct 5 2020, 13:56:01) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
ansible --list-hosts all -i host02,host03,host01
```
hosts (3):
host02
host03
host01
```
ansible-inventory --list all -i host02,host03,host01
```
{
"_meta": {
"hostvars": {
"host01": {},
"host02": {},
"host03": {}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"host01",
"host02",
"host03"
]
}
}
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
_ansible-inventory_ shows Ansible inventory information, the order provided in the inventory (default).
[Ordering execution based on inventory](https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html#ordering-execution-based-on-inventory)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
_ansible-inventory_ command ignores inventory order and returns hosts sorted alphabetically by name
|
https://github.com/ansible/ansible/issues/73047
|
https://github.com/ansible/ansible/pull/74839
|
1998521e2d5b89bc53d00639bad178330ebb98df
|
5b51b560d0328e35dad5d4c77688f7577081c0ed
| 2020-12-22T12:32:27Z |
python
| 2022-11-30T14:25:34Z |
lib/ansible/cli/inventory.py
|
#!/usr/bin/env python
# Copyright: (c) 2017, Brian Coca <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import sys
import argparse
from operator import attrgetter
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.utils.vars import combine_vars
from ansible.utils.display import Display
from ansible.vars.plugins import get_vars_from_inventory_sources, get_vars_from_path
display = Display()
INTERNAL_VARS = frozenset(['ansible_diff_mode',
'ansible_config_file',
'ansible_facts',
'ansible_forks',
'ansible_inventory_sources',
'ansible_limit',
'ansible_playbook_python',
'ansible_run_tags',
'ansible_skip_tags',
'ansible_verbosity',
'ansible_version',
'inventory_dir',
'inventory_file',
'inventory_hostname',
'inventory_hostname_short',
'groups',
'group_names',
'omit',
'playbook_dir', ])
class InventoryCLI(CLI):
''' used to display or dump the configured inventory as Ansible sees it '''
name = 'ansible-inventory'
ARGUMENTS = {'host': 'The name of a host to match in the inventory, relevant when using --list',
'group': 'The name of a group in the inventory, relevant when using --graph', }
def __init__(self, args):
super(InventoryCLI, self).__init__(args)
self.vm = None
self.loader = None
self.inventory = None
def init_parser(self):
super(InventoryCLI, self).init_parser(
usage='usage: %prog [options] [host|group]',
epilog='Show Ansible inventory information, by default it uses the inventory script JSON format')
opt_help.add_inventory_options(self.parser)
opt_help.add_vault_options(self.parser)
opt_help.add_basedir_options(self.parser)
opt_help.add_runtask_options(self.parser)
# remove unused default options
self.parser.add_argument('-l', '--limit', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument, nargs='?')
self.parser.add_argument('--list-hosts', help=argparse.SUPPRESS, action=opt_help.UnrecognizedArgument)
self.parser.add_argument('args', metavar='host|group', nargs='?')
# Actions
        action_group = self.parser.add_argument_group("Actions", "One of the following must be used on invocation, ONLY ONE!")
action_group.add_argument("--list", action="store_true", default=False, dest='list', help='Output all hosts info, works as inventory script')
action_group.add_argument("--host", action="store", default=None, dest='host', help='Output specific host info, works as inventory script')
action_group.add_argument("--graph", action="store_true", default=False, dest='graph',
help='create inventory graph, if supplying pattern it must be a valid group name')
self.parser.add_argument_group(action_group)
# graph
self.parser.add_argument("-y", "--yaml", action="store_true", default=False, dest='yaml',
help='Use YAML format instead of default JSON, ignored for --graph')
self.parser.add_argument('--toml', action='store_true', default=False, dest='toml',
help='Use TOML format instead of default JSON, ignored for --graph')
self.parser.add_argument("--vars", action="store_true", default=False, dest='show_vars',
help='Add vars to graph display, ignored unless used with --graph')
# list
self.parser.add_argument("--export", action="store_true", default=C.INVENTORY_EXPORT, dest='export',
help="When doing an --list, represent in a way that is optimized for export,"
"not as an accurate representation of how Ansible has processed it")
self.parser.add_argument('--output', default=None, dest='output_file',
help="When doing --list, send the inventory to a file instead of to the screen")
# self.parser.add_argument("--ignore-vars-plugins", action="store_true", default=False, dest='ignore_vars_plugins',
# help="When doing an --list, skip vars data from vars plugins, by default, this would include group_vars/ and host_vars/")
def post_process_args(self, options):
options = super(InventoryCLI, self).post_process_args(options)
display.verbosity = options.verbosity
self.validate_conflicts(options)
# there can be only one! and, at least, one!
used = 0
for opt in (options.list, options.host, options.graph):
if opt:
used += 1
if used == 0:
raise AnsibleOptionsError("No action selected, at least one of --host, --graph or --list needs to be specified.")
elif used > 1:
raise AnsibleOptionsError("Conflicting options used, only one of --host, --graph or --list can be used at the same time.")
# set host pattern to default if not supplied
if options.args:
options.pattern = options.args
else:
options.pattern = 'all'
return options
def run(self):
super(InventoryCLI, self).run()
# Initialize needed objects
self.loader, self.inventory, self.vm = self._play_prereqs()
results = None
if context.CLIARGS['host']:
hosts = self.inventory.get_hosts(context.CLIARGS['host'])
if len(hosts) != 1:
raise AnsibleOptionsError("You must pass a single valid host to --host parameter")
myvars = self._get_host_variables(host=hosts[0])
# FIXME: should we template first?
results = self.dump(myvars)
elif context.CLIARGS['graph']:
results = self.inventory_graph()
elif context.CLIARGS['list']:
top = self._get_group('all')
if context.CLIARGS['yaml']:
results = self.yaml_inventory(top)
elif context.CLIARGS['toml']:
results = self.toml_inventory(top)
else:
results = self.json_inventory(top)
results = self.dump(results)
if results:
outfile = context.CLIARGS['output_file']
if outfile is None:
# FIXME: pager?
display.display(results)
else:
try:
with open(to_bytes(outfile), 'wb') as f:
f.write(to_bytes(results))
except (OSError, IOError) as e:
raise AnsibleError('Unable to write to destination file (%s): %s' % (to_native(outfile), to_native(e)))
sys.exit(0)
sys.exit(1)
@staticmethod
def dump(stuff):
if context.CLIARGS['yaml']:
import yaml
from ansible.parsing.yaml.dumper import AnsibleDumper
results = to_text(yaml.dump(stuff, Dumper=AnsibleDumper, default_flow_style=False, allow_unicode=True))
elif context.CLIARGS['toml']:
from ansible.plugins.inventory.toml import toml_dumps
try:
results = toml_dumps(stuff)
except TypeError as e:
raise AnsibleError(
'The source inventory contains a value that cannot be represented in TOML: %s' % e
)
except KeyError as e:
raise AnsibleError(
'The source inventory contains a non-string key (%s) which cannot be represented in TOML. '
'The specified key will need to be converted to a string. Be aware that if your playbooks '
'expect this key to be non-string, your playbooks will need to be modified to support this '
'change.' % e.args[0]
)
else:
import json
from ansible.parsing.ajson import AnsibleJSONEncoder
try:
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=True, indent=4, preprocess_unsafe=True, ensure_ascii=False)
except TypeError as e:
results = json.dumps(stuff, cls=AnsibleJSONEncoder, sort_keys=False, indent=4, preprocess_unsafe=True, ensure_ascii=False)
display.warning("Could not sort JSON output due to issues while sorting keys: %s" % to_native(e))
return results
def _get_group_variables(self, group):
# get info from inventory source
res = group.get_vars()
# Always load vars plugins
res = combine_vars(res, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [group], 'all'))
if context.CLIARGS['basedir']:
res = combine_vars(res, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [group], 'all'))
if group.priority != 1:
res['ansible_group_priority'] = group.priority
return self._remove_internal(res)
def _get_host_variables(self, host):
if context.CLIARGS['export']:
# only get vars defined directly host
hostvars = host.get_vars()
# Always load vars plugins
hostvars = combine_vars(hostvars, get_vars_from_inventory_sources(self.loader, self.inventory._sources, [host], 'all'))
if context.CLIARGS['basedir']:
hostvars = combine_vars(hostvars, get_vars_from_path(self.loader, context.CLIARGS['basedir'], [host], 'all'))
else:
# get all vars flattened by host, but skip magic hostvars
hostvars = self.vm.get_vars(host=host, include_hostvars=False, stage='all')
return self._remove_internal(hostvars)
def _get_group(self, gname):
group = self.inventory.groups.get(gname)
return group
@staticmethod
def _remove_internal(dump):
for internal in INTERNAL_VARS:
if internal in dump:
del dump[internal]
return dump
@staticmethod
def _remove_empty(dump):
# remove empty keys
for x in ('hosts', 'vars', 'children'):
if x in dump and not dump[x]:
del dump[x]
@staticmethod
def _show_vars(dump, depth):
result = []
for (name, val) in sorted(dump.items()):
result.append(InventoryCLI._graph_name('{%s = %s}' % (name, val), depth))
return result
@staticmethod
def _graph_name(name, depth=0):
if depth:
name = " |" * (depth) + "--%s" % name
return name
def _graph_group(self, group, depth=0):
result = [self._graph_name('@%s:' % group.name, depth)]
depth = depth + 1
for kid in sorted(group.child_groups, key=attrgetter('name')):
result.extend(self._graph_group(kid, depth))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
result.append(self._graph_name(host.name, depth))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_host_variables(host), depth + 1))
if context.CLIARGS['show_vars']:
result.extend(self._show_vars(self._get_group_variables(group), depth))
return result
def inventory_graph(self):
start_at = self._get_group(context.CLIARGS['pattern'])
if start_at:
return '\n'.join(self._graph_group(start_at))
else:
raise AnsibleOptionsError("Pattern must be valid group name when using --graph")
def json_inventory(self, top):
seen = set()
def format_group(group):
results = {}
results[group.name] = {}
if group.name != 'all':
results[group.name]['hosts'] = [h.name for h in sorted(group.hosts, key=attrgetter('name'))]
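                # Editorial note: this sorted() call, and its siblings in
                # yaml_inventory() and toml_inventory() below, is why --list
                # output ignores the source order of the inventory (issue
                # #73047).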
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
results[group.name]['children'].append(subgroup.name)
if subgroup.name not in seen:
results.update(format_group(subgroup))
seen.add(subgroup.name)
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
# populate meta
results['_meta'] = {'hostvars': {}}
hosts = self.inventory.get_hosts()
for host in hosts:
hvars = self._get_host_variables(host)
if hvars:
results['_meta']['hostvars'][host.name] = hvars
return results
def yaml_inventory(self, top):
seen = []
def format_group(group):
results = {}
# initialize group + vars
results[group.name] = {}
# subgroups
results[group.name]['children'] = {}
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name != 'all':
results[group.name]['children'].update(format_group(subgroup))
# hosts for group
results[group.name]['hosts'] = {}
if group.name != 'all':
for h in sorted(group.hosts, key=attrgetter('name')):
myvars = {}
if h.name not in seen: # avoid defining host vars more than once
seen.append(h.name)
myvars = self._get_host_variables(host=h)
results[group.name]['hosts'][h.name] = myvars
if context.CLIARGS['export']:
gvars = self._get_group_variables(group)
if gvars:
results[group.name]['vars'] = gvars
self._remove_empty(results[group.name])
return results
return format_group(top)
def toml_inventory(self, top):
seen = set()
has_ungrouped = bool(next(g.hosts for g in top.child_groups if g.name == 'ungrouped'))
def format_group(group):
results = {}
results[group.name] = {}
results[group.name]['children'] = []
for subgroup in sorted(group.child_groups, key=attrgetter('name')):
if subgroup.name == 'ungrouped' and not has_ungrouped:
continue
if group.name != 'all':
results[group.name]['children'].append(subgroup.name)
results.update(format_group(subgroup))
if group.name != 'all':
for host in sorted(group.hosts, key=attrgetter('name')):
if host.name not in seen:
seen.add(host.name)
host_vars = self._get_host_variables(host=host)
else:
host_vars = {}
try:
results[group.name]['hosts'][host.name] = host_vars
except KeyError:
results[group.name]['hosts'] = {host.name: host_vars}
if context.CLIARGS['export']:
results[group.name]['vars'] = self._get_group_variables(group)
self._remove_empty(results[group.name])
if not results[group.name]:
del results[group.name]
return results
results = format_group(top)
return results
def main(args=None):
InventoryCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 73,047 |
ansible-inventory ignores inventory order and returns hosts sorted by name
|
##### SUMMARY
_ansible-inventory_ ignores inventory order and returns hosts sorted by name
_ansible-inventory_ is very convenient for quickly having a look at the inventory; it is very handy when reviewing complex inventories built from many inventory files, scripts and group_vars.
It matters when chronological host order (sorted by creation time) is more important than names, and for projects that use group position in patterns, for example `groups['servers'][0]`, as is often done for services like Grafana and Prometheus.
I'd like to use the ansible-inventory command to quickly review inventories and occasionally use the output for executing predictable tests.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ansible-inventory
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.10.4
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.9.16
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
```
```paste below
ansible 2.5.0
python version = 3.8.5 (default, Jul 28 2020, 12:59:40) [GCC 9.3.0]
ansible 2.5.0
python version = 2.7.12 (default, Oct 5 2020, 13:56:01) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Linux
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
ansible --list-hosts all -i host02,host03,host01
```
hosts (3):
host02
host03
host01
```
ansible-inventory --list all -i host02,host03,host01
```
{
"_meta": {
"hostvars": {
"host01": {},
"host02": {},
"host03": {}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"host01",
"host02",
"host03"
]
}
}
```
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
_ansible-inventory_ shows Ansible inventory information, the order provided in the inventory (default).
[Ordering execution based on inventory](https://docs.ansible.com/ansible/latest/user_guide/playbooks_strategies.html#ordering-execution-based-on-inventory)
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
_ansible-inventory_ command ignores inventory order and returns hosts sorted alphabetically by name
|
https://github.com/ansible/ansible/issues/73047
|
https://github.com/ansible/ansible/pull/74839
|
1998521e2d5b89bc53d00639bad178330ebb98df
|
5b51b560d0328e35dad5d4c77688f7577081c0ed
| 2020-12-22T12:32:27Z |
python
| 2022-11-30T14:25:34Z |
test/integration/targets/inventory_script/inventory.json
|
{
"None": {
"hosts": [
"DC0_C0_RP0_VM0_cd0681bf-2f18-5c00-9b9b-8197c0095348",
"DC0_C0_RP0_VM1_f7c371d6-2003-5a48-9859-3bc9a8b08908",
"DC0_H0_VM0_265104de-1472-547c-b873-6dc7883fb6cb",
"DC0_H0_VM1_39365506-5a0a-5fd0-be10-9586ad53aaad"
]
},
"_meta": {
"hostvars": {
"DC0_C0_RP0_VM0_cd0681bf-2f18-5c00-9b9b-8197c0095348": {
"alarmactionsenabled": null,
"ansible_host": "None",
"ansible_ssh_host": "None",
"ansible_uuid": "239fb366-6d93-430e-939a-0b6ab272d98f",
"availablefield": [],
"capability": {
"bootoptionssupported": false,
"bootretryoptionssupported": false,
"changetrackingsupported": false,
"consolepreferencessupported": false,
"cpufeaturemasksupported": false,
"disablesnapshotssupported": false,
"diskonlysnapshotonsuspendedvmsupported": null,
"disksharessupported": false,
"dynamicproperty": [],
"dynamictype": null,
"featurerequirementsupported": false,
"guestautolocksupported": false,
"hostbasedreplicationsupported": false,
"locksnapshotssupported": false,
"memoryreservationlocksupported": false,
"memorysnapshotssupported": false,
"multiplecorespersocketsupported": false,
"multiplesnapshotssupported": false,
"nestedhvsupported": false,
"npivwwnonnonrdmvmsupported": false,
"pervmevcsupported": null,
"poweredoffsnapshotssupported": false,
"poweredonmonitortypechangesupported": false,
"quiescedsnapshotssupported": false,
"recordreplaysupported": false,
"reverttosnapshotsupported": false,
"s1acpimanagementsupported": false,
"securebootsupported": null,
"sesparsedisksupported": false,
"settingdisplaytopologysupported": false,
"settingscreenresolutionsupported": false,
"settingvideoramsizesupported": false,
"snapshotconfigsupported": false,
"snapshotoperationssupported": false,
"swapplacementsupported": false,
"toolsautoupdatesupported": false,
"toolssynctimesupported": false,
"virtualexecusageignored": null,
"virtualmmuusageignored": null,
"virtualmmuusagesupported": false,
"vmnpivwwndisablesupported": false,
"vmnpivwwnsupported": false,
"vmnpivwwnupdatesupported": false,
"vpmcsupported": false
},
"config": {
"alternateguestname": "",
"annotation": null,
"bootoptions": null,
"changetrackingenabled": null,
"changeversion": "",
"consolepreferences": null,
"contentlibiteminfo": null,
"cpuaffinity": null,
"cpuallocation": {},
"cpufeaturemask": [],
"cpuhotaddenabled": null,
"cpuhotremoveenabled": null,
"createdate": null,
"datastoreurl": [],
"defaultpowerops": {},
"dynamicproperty": [],
"dynamictype": null,
"extraconfig": [],
"files": {},
"firmware": null,
"flags": {},
"forkconfiginfo": null,
"ftinfo": null,
"guestautolockenabled": null,
"guestfullname": "otherGuest",
"guestid": "otherGuest",
"guestintegrityinfo": null,
"guestmonitoringmodeinfo": null,
"hardware": {},
"hotplugmemoryincrementsize": null,
"hotplugmemorylimit": null,
"initialoverhead": null,
"instanceuuid": "bfff331f-7f07-572d-951e-edd3701dc061",
"keyid": null,
"latencysensitivity": null,
"locationid": null,
"managedby": null,
"maxmksconnections": null,
"memoryaffinity": null,
"memoryallocation": {},
"memoryhotaddenabled": null,
"memoryreservationlockedtomax": null,
"messagebustunnelenabled": null,
"migrateencryption": null,
"modified": {},
"name": "DC0_C0_RP0_VM0",
"nestedhvenabled": null,
"networkshaper": null,
"npivdesirednodewwns": null,
"npivdesiredportwwns": null,
"npivnodeworldwidename": [],
"npivonnonrdmdisks": null,
"npivportworldwidename": [],
"npivtemporarydisabled": null,
"npivworldwidenametype": null,
"repconfig": null,
"scheduledhardwareupgradeinfo": null,
"sgxinfo": null,
"swapplacement": null,
"swapstorageobjectid": null,
"template": false,
"tools": {},
"uuid": "cd0681bf-2f18-5c00-9b9b-8197c0095348",
"vappconfig": null,
"vassertsenabled": null,
"vcpuconfig": [],
"version": "vmx-13",
"vflashcachereservation": null,
"vmstorageobjectid": null,
"vmxconfigchecksum": null,
"vpmcenabled": null
},
"configissue": [],
"configstatus": "green",
"customvalue": [],
"datastore": [
{
"_moId": "/tmp/govcsim-DC0-LocalDS_0-949174843@folder-5",
"name": "LocalDS_0"
}
],
"effectiverole": [
-1
],
"guest": {
"appheartbeatstatus": null,
"appstate": null,
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"generationinfo": [],
"guestfamily": null,
"guestfullname": null,
"guestid": null,
"guestkernelcrashed": null,
"guestoperationsready": null,
"gueststate": "",
"gueststatechangesupported": null,
"hostname": null,
"hwversion": null,
"interactiveguestoperationsready": null,
"ipaddress": null,
"ipstack": [],
"net": [],
"screen": null,
"toolsinstalltype": null,
"toolsrunningstatus": "guestToolsNotRunning",
"toolsstatus": "toolsNotInstalled",
"toolsversion": "0",
"toolsversionstatus": null,
"toolsversionstatus2": null
},
"guestheartbeatstatus": null,
"layout": {
"configfile": [],
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"logfile": [],
"snapshot": [],
"swapfile": null
},
"layoutex": {
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"file": [],
"snapshot": [],
"timestamp": {}
},
"name": "DC0_C0_RP0_VM0",
"network": [],
"overallstatus": "green",
"parentvapp": null,
"permission": [],
"recenttask": [],
"resourcepool": {
"_moId": "resgroup-26",
"name": "Resources"
},
"rootsnapshot": [],
"runtime": {
"boottime": null,
"cleanpoweroff": null,
"connectionstate": "connected",
"consolidationneeded": false,
"cryptostate": null,
"dasvmprotection": null,
"device": [],
"dynamicproperty": [],
"dynamictype": null,
"faulttolerancestate": null,
"featuremask": [],
"featurerequirement": [],
"host": {
"_moId": "host-47",
"name": "DC0_C0_H2"
},
"instantclonefrozen": null,
"maxcpuusage": null,
"maxmemoryusage": null,
"memoryoverhead": null,
"minrequiredevcmodekey": null,
"needsecondaryreason": null,
"nummksconnections": 0,
"offlinefeaturerequirement": [],
"onlinestandby": false,
"paused": null,
"powerstate": "poweredOn",
"question": null,
"quiescedforkparent": null,
"recordreplaystate": null,
"snapshotinbackground": null,
"suspendinterval": null,
"suspendtime": null,
"toolsinstallermounted": false,
"vflashcacheallocation": null
},
"snapshot": null,
"storage": {
"dynamicproperty": [],
"dynamictype": null,
"perdatastoreusage": [],
"timestamp": {}
},
"summary": {
"config": {},
"customvalue": [],
"dynamicproperty": [],
"dynamictype": null,
"guest": {},
"overallstatus": "green",
"quickstats": {},
"runtime": {},
"storage": {},
"vm": {}
},
"tag": [],
"triggeredalarmstate": [],
"value": []
},
"DC0_C0_RP0_VM1_f7c371d6-2003-5a48-9859-3bc9a8b08908": {
"alarmactionsenabled": null,
"ansible_host": "None",
"ansible_ssh_host": "None",
"ansible_uuid": "64b6ca93-f35f-4749-abeb-fc1fabae6c79",
"availablefield": [],
"capability": {
"bootoptionssupported": false,
"bootretryoptionssupported": false,
"changetrackingsupported": false,
"consolepreferencessupported": false,
"cpufeaturemasksupported": false,
"disablesnapshotssupported": false,
"diskonlysnapshotonsuspendedvmsupported": null,
"disksharessupported": false,
"dynamicproperty": [],
"dynamictype": null,
"featurerequirementsupported": false,
"guestautolocksupported": false,
"hostbasedreplicationsupported": false,
"locksnapshotssupported": false,
"memoryreservationlocksupported": false,
"memorysnapshotssupported": false,
"multiplecorespersocketsupported": false,
"multiplesnapshotssupported": false,
"nestedhvsupported": false,
"npivwwnonnonrdmvmsupported": false,
"pervmevcsupported": null,
"poweredoffsnapshotssupported": false,
"poweredonmonitortypechangesupported": false,
"quiescedsnapshotssupported": false,
"recordreplaysupported": false,
"reverttosnapshotsupported": false,
"s1acpimanagementsupported": false,
"securebootsupported": null,
"sesparsedisksupported": false,
"settingdisplaytopologysupported": false,
"settingscreenresolutionsupported": false,
"settingvideoramsizesupported": false,
"snapshotconfigsupported": false,
"snapshotoperationssupported": false,
"swapplacementsupported": false,
"toolsautoupdatesupported": false,
"toolssynctimesupported": false,
"virtualexecusageignored": null,
"virtualmmuusageignored": null,
"virtualmmuusagesupported": false,
"vmnpivwwndisablesupported": false,
"vmnpivwwnsupported": false,
"vmnpivwwnupdatesupported": false,
"vpmcsupported": false
},
"config": {
"alternateguestname": "",
"annotation": null,
"bootoptions": null,
"changetrackingenabled": null,
"changeversion": "",
"consolepreferences": null,
"contentlibiteminfo": null,
"cpuaffinity": null,
"cpuallocation": {},
"cpufeaturemask": [],
"cpuhotaddenabled": null,
"cpuhotremoveenabled": null,
"createdate": null,
"datastoreurl": [],
"defaultpowerops": {},
"dynamicproperty": [],
"dynamictype": null,
"extraconfig": [],
"files": {},
"firmware": null,
"flags": {},
"forkconfiginfo": null,
"ftinfo": null,
"guestautolockenabled": null,
"guestfullname": "otherGuest",
"guestid": "otherGuest",
"guestintegrityinfo": null,
"guestmonitoringmodeinfo": null,
"hardware": {},
"hotplugmemoryincrementsize": null,
"hotplugmemorylimit": null,
"initialoverhead": null,
"instanceuuid": "6132d223-1566-5921-bc3b-df91ece09a4d",
"keyid": null,
"latencysensitivity": null,
"locationid": null,
"managedby": null,
"maxmksconnections": null,
"memoryaffinity": null,
"memoryallocation": {},
"memoryhotaddenabled": null,
"memoryreservationlockedtomax": null,
"messagebustunnelenabled": null,
"migrateencryption": null,
"modified": {},
"name": "DC0_C0_RP0_VM1",
"nestedhvenabled": null,
"networkshaper": null,
"npivdesirednodewwns": null,
"npivdesiredportwwns": null,
"npivnodeworldwidename": [],
"npivonnonrdmdisks": null,
"npivportworldwidename": [],
"npivtemporarydisabled": null,
"npivworldwidenametype": null,
"repconfig": null,
"scheduledhardwareupgradeinfo": null,
"sgxinfo": null,
"swapplacement": null,
"swapstorageobjectid": null,
"template": false,
"tools": {},
"uuid": "f7c371d6-2003-5a48-9859-3bc9a8b08908",
"vappconfig": null,
"vassertsenabled": null,
"vcpuconfig": [],
"version": "vmx-13",
"vflashcachereservation": null,
"vmstorageobjectid": null,
"vmxconfigchecksum": null,
"vpmcenabled": null
},
"configissue": [],
"configstatus": "green",
"customvalue": [],
"datastore": [
{
"_moId": "/tmp/govcsim-DC0-LocalDS_0-949174843@folder-5",
"name": "LocalDS_0"
}
],
"effectiverole": [
-1
],
"guest": {
"appheartbeatstatus": null,
"appstate": null,
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"generationinfo": [],
"guestfamily": null,
"guestfullname": null,
"guestid": null,
"guestkernelcrashed": null,
"guestoperationsready": null,
"gueststate": "",
"gueststatechangesupported": null,
"hostname": null,
"hwversion": null,
"interactiveguestoperationsready": null,
"ipaddress": null,
"ipstack": [],
"net": [],
"screen": null,
"toolsinstalltype": null,
"toolsrunningstatus": "guestToolsNotRunning",
"toolsstatus": "toolsNotInstalled",
"toolsversion": "0",
"toolsversionstatus": null,
"toolsversionstatus2": null
},
"guestheartbeatstatus": null,
"layout": {
"configfile": [],
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"logfile": [],
"snapshot": [],
"swapfile": null
},
"layoutex": {
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"file": [],
"snapshot": [],
"timestamp": {}
},
"name": "DC0_C0_RP0_VM1",
"network": [],
"overallstatus": "green",
"parentvapp": null,
"permission": [],
"recenttask": [],
"resourcepool": {
"_moId": "resgroup-26",
"name": "Resources"
},
"rootsnapshot": [],
"runtime": {
"boottime": null,
"cleanpoweroff": null,
"connectionstate": "connected",
"consolidationneeded": false,
"cryptostate": null,
"dasvmprotection": null,
"device": [],
"dynamicproperty": [],
"dynamictype": null,
"faulttolerancestate": null,
"featuremask": [],
"featurerequirement": [],
"host": {
"_moId": "host-33",
"name": "DC0_C0_H0"
},
"instantclonefrozen": null,
"maxcpuusage": null,
"maxmemoryusage": null,
"memoryoverhead": null,
"minrequiredevcmodekey": null,
"needsecondaryreason": null,
"nummksconnections": 0,
"offlinefeaturerequirement": [],
"onlinestandby": false,
"paused": null,
"powerstate": "poweredOn",
"question": null,
"quiescedforkparent": null,
"recordreplaystate": null,
"snapshotinbackground": null,
"suspendinterval": null,
"suspendtime": null,
"toolsinstallermounted": false,
"vflashcacheallocation": null
},
"snapshot": null,
"storage": {
"dynamicproperty": [],
"dynamictype": null,
"perdatastoreusage": [],
"timestamp": {}
},
"summary": {
"config": {},
"customvalue": [],
"dynamicproperty": [],
"dynamictype": null,
"guest": {},
"overallstatus": "green",
"quickstats": {},
"runtime": {},
"storage": {},
"vm": {}
},
"tag": [],
"triggeredalarmstate": [],
"value": []
},
"DC0_H0_VM0_265104de-1472-547c-b873-6dc7883fb6cb": {
"alarmactionsenabled": null,
"ansible_host": "None",
"ansible_ssh_host": "None",
"ansible_uuid": "6616671b-16b0-494c-8201-737ca506790b",
"availablefield": [],
"capability": {
"bootoptionssupported": false,
"bootretryoptionssupported": false,
"changetrackingsupported": false,
"consolepreferencessupported": false,
"cpufeaturemasksupported": false,
"disablesnapshotssupported": false,
"diskonlysnapshotonsuspendedvmsupported": null,
"disksharessupported": false,
"dynamicproperty": [],
"dynamictype": null,
"featurerequirementsupported": false,
"guestautolocksupported": false,
"hostbasedreplicationsupported": false,
"locksnapshotssupported": false,
"memoryreservationlocksupported": false,
"memorysnapshotssupported": false,
"multiplecorespersocketsupported": false,
"multiplesnapshotssupported": false,
"nestedhvsupported": false,
"npivwwnonnonrdmvmsupported": false,
"pervmevcsupported": null,
"poweredoffsnapshotssupported": false,
"poweredonmonitortypechangesupported": false,
"quiescedsnapshotssupported": false,
"recordreplaysupported": false,
"reverttosnapshotsupported": false,
"s1acpimanagementsupported": false,
"securebootsupported": null,
"sesparsedisksupported": false,
"settingdisplaytopologysupported": false,
"settingscreenresolutionsupported": false,
"settingvideoramsizesupported": false,
"snapshotconfigsupported": false,
"snapshotoperationssupported": false,
"swapplacementsupported": false,
"toolsautoupdatesupported": false,
"toolssynctimesupported": false,
"virtualexecusageignored": null,
"virtualmmuusageignored": null,
"virtualmmuusagesupported": false,
"vmnpivwwndisablesupported": false,
"vmnpivwwnsupported": false,
"vmnpivwwnupdatesupported": false,
"vpmcsupported": false
},
"config": {
"alternateguestname": "",
"annotation": null,
"bootoptions": null,
"changetrackingenabled": null,
"changeversion": "",
"consolepreferences": null,
"contentlibiteminfo": null,
"cpuaffinity": null,
"cpuallocation": {},
"cpufeaturemask": [],
"cpuhotaddenabled": null,
"cpuhotremoveenabled": null,
"createdate": null,
"datastoreurl": [],
"defaultpowerops": {},
"dynamicproperty": [],
"dynamictype": null,
"extraconfig": [],
"files": {},
"firmware": null,
"flags": {},
"forkconfiginfo": null,
"ftinfo": null,
"guestautolockenabled": null,
"guestfullname": "otherGuest",
"guestid": "otherGuest",
"guestintegrityinfo": null,
"guestmonitoringmodeinfo": null,
"hardware": {},
"hotplugmemoryincrementsize": null,
"hotplugmemorylimit": null,
"initialoverhead": null,
"instanceuuid": "b4689bed-97f0-5bcd-8a4c-07477cc8f06f",
"keyid": null,
"latencysensitivity": null,
"locationid": null,
"managedby": null,
"maxmksconnections": null,
"memoryaffinity": null,
"memoryallocation": {},
"memoryhotaddenabled": null,
"memoryreservationlockedtomax": null,
"messagebustunnelenabled": null,
"migrateencryption": null,
"modified": {},
"name": "DC0_H0_VM0",
"nestedhvenabled": null,
"networkshaper": null,
"npivdesirednodewwns": null,
"npivdesiredportwwns": null,
"npivnodeworldwidename": [],
"npivonnonrdmdisks": null,
"npivportworldwidename": [],
"npivtemporarydisabled": null,
"npivworldwidenametype": null,
"repconfig": null,
"scheduledhardwareupgradeinfo": null,
"sgxinfo": null,
"swapplacement": null,
"swapstorageobjectid": null,
"template": false,
"tools": {},
"uuid": "265104de-1472-547c-b873-6dc7883fb6cb",
"vappconfig": null,
"vassertsenabled": null,
"vcpuconfig": [],
"version": "vmx-13",
"vflashcachereservation": null,
"vmstorageobjectid": null,
"vmxconfigchecksum": null,
"vpmcenabled": null
},
"configissue": [],
"configstatus": "green",
"customvalue": [],
"datastore": [
{
"_moId": "/tmp/govcsim-DC0-LocalDS_0-949174843@folder-5",
"name": "LocalDS_0"
}
],
"effectiverole": [
-1
],
"guest": {
"appheartbeatstatus": null,
"appstate": null,
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"generationinfo": [],
"guestfamily": null,
"guestfullname": null,
"guestid": null,
"guestkernelcrashed": null,
"guestoperationsready": null,
"gueststate": "",
"gueststatechangesupported": null,
"hostname": null,
"hwversion": null,
"interactiveguestoperationsready": null,
"ipaddress": null,
"ipstack": [],
"net": [],
"screen": null,
"toolsinstalltype": null,
"toolsrunningstatus": "guestToolsNotRunning",
"toolsstatus": "toolsNotInstalled",
"toolsversion": "0",
"toolsversionstatus": null,
"toolsversionstatus2": null
},
"guestheartbeatstatus": null,
"layout": {
"configfile": [],
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"logfile": [],
"snapshot": [],
"swapfile": null
},
"layoutex": {
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"file": [],
"snapshot": [],
"timestamp": {}
},
"name": "DC0_H0_VM0",
"network": [],
"overallstatus": "green",
"parentvapp": null,
"permission": [],
"recenttask": [],
"resourcepool": {
"_moId": "resgroup-22",
"name": "Resources"
},
"rootsnapshot": [],
"runtime": {
"boottime": null,
"cleanpoweroff": null,
"connectionstate": "connected",
"consolidationneeded": false,
"cryptostate": null,
"dasvmprotection": null,
"device": [],
"dynamicproperty": [],
"dynamictype": null,
"faulttolerancestate": null,
"featuremask": [],
"featurerequirement": [],
"host": {
"_moId": "host-21",
"name": "DC0_H0"
},
"instantclonefrozen": null,
"maxcpuusage": null,
"maxmemoryusage": null,
"memoryoverhead": null,
"minrequiredevcmodekey": null,
"needsecondaryreason": null,
"nummksconnections": 0,
"offlinefeaturerequirement": [],
"onlinestandby": false,
"paused": null,
"powerstate": "poweredOn",
"question": null,
"quiescedforkparent": null,
"recordreplaystate": null,
"snapshotinbackground": null,
"suspendinterval": null,
"suspendtime": null,
"toolsinstallermounted": false,
"vflashcacheallocation": null
},
"snapshot": null,
"storage": {
"dynamicproperty": [],
"dynamictype": null,
"perdatastoreusage": [],
"timestamp": {}
},
"summary": {
"config": {},
"customvalue": [],
"dynamicproperty": [],
"dynamictype": null,
"guest": {},
"overallstatus": "green",
"quickstats": {},
"runtime": {},
"storage": {},
"vm": {}
},
"tag": [],
"triggeredalarmstate": [],
"value": []
},
"DC0_H0_VM1_39365506-5a0a-5fd0-be10-9586ad53aaad": {
"alarmactionsenabled": null,
"ansible_host": "None",
"ansible_ssh_host": "None",
"ansible_uuid": "50401ff9-720a-4166-b9e6-d7cd0d9a4dc9",
"availablefield": [],
"capability": {
"bootoptionssupported": false,
"bootretryoptionssupported": false,
"changetrackingsupported": false,
"consolepreferencessupported": false,
"cpufeaturemasksupported": false,
"disablesnapshotssupported": false,
"diskonlysnapshotonsuspendedvmsupported": null,
"disksharessupported": false,
"dynamicproperty": [],
"dynamictype": null,
"featurerequirementsupported": false,
"guestautolocksupported": false,
"hostbasedreplicationsupported": false,
"locksnapshotssupported": false,
"memoryreservationlocksupported": false,
"memorysnapshotssupported": false,
"multiplecorespersocketsupported": false,
"multiplesnapshotssupported": false,
"nestedhvsupported": false,
"npivwwnonnonrdmvmsupported": false,
"pervmevcsupported": null,
"poweredoffsnapshotssupported": false,
"poweredonmonitortypechangesupported": false,
"quiescedsnapshotssupported": false,
"recordreplaysupported": false,
"reverttosnapshotsupported": false,
"s1acpimanagementsupported": false,
"securebootsupported": null,
"sesparsedisksupported": false,
"settingdisplaytopologysupported": false,
"settingscreenresolutionsupported": false,
"settingvideoramsizesupported": false,
"snapshotconfigsupported": false,
"snapshotoperationssupported": false,
"swapplacementsupported": false,
"toolsautoupdatesupported": false,
"toolssynctimesupported": false,
"virtualexecusageignored": null,
"virtualmmuusageignored": null,
"virtualmmuusagesupported": false,
"vmnpivwwndisablesupported": false,
"vmnpivwwnsupported": false,
"vmnpivwwnupdatesupported": false,
"vpmcsupported": false
},
"config": {
"alternateguestname": "",
"annotation": null,
"bootoptions": null,
"changetrackingenabled": null,
"changeversion": "",
"consolepreferences": null,
"contentlibiteminfo": null,
"cpuaffinity": null,
"cpuallocation": {},
"cpufeaturemask": [],
"cpuhotaddenabled": null,
"cpuhotremoveenabled": null,
"createdate": null,
"datastoreurl": [],
"defaultpowerops": {},
"dynamicproperty": [],
"dynamictype": null,
"extraconfig": [],
"files": {},
"firmware": null,
"flags": {},
"forkconfiginfo": null,
"ftinfo": null,
"guestautolockenabled": null,
"guestfullname": "otherGuest",
"guestid": "otherGuest",
"guestintegrityinfo": null,
"guestmonitoringmodeinfo": null,
"hardware": {},
"hotplugmemoryincrementsize": null,
"hotplugmemorylimit": null,
"initialoverhead": null,
"instanceuuid": "12f8928d-f144-5c57-89db-dd2d0902c9fa",
"keyid": null,
"latencysensitivity": null,
"locationid": null,
"managedby": null,
"maxmksconnections": null,
"memoryaffinity": null,
"memoryallocation": {},
"memoryhotaddenabled": null,
"memoryreservationlockedtomax": null,
"messagebustunnelenabled": null,
"migrateencryption": null,
"modified": {},
"name": "DC0_H0_VM1",
"nestedhvenabled": null,
"networkshaper": null,
"npivdesirednodewwns": null,
"npivdesiredportwwns": null,
"npivnodeworldwidename": [],
"npivonnonrdmdisks": null,
"npivportworldwidename": [],
"npivtemporarydisabled": null,
"npivworldwidenametype": null,
"repconfig": null,
"scheduledhardwareupgradeinfo": null,
"sgxinfo": null,
"swapplacement": null,
"swapstorageobjectid": null,
"template": false,
"tools": {},
"uuid": "39365506-5a0a-5fd0-be10-9586ad53aaad",
"vappconfig": null,
"vassertsenabled": null,
"vcpuconfig": [],
"version": "vmx-13",
"vflashcachereservation": null,
"vmstorageobjectid": null,
"vmxconfigchecksum": null,
"vpmcenabled": null
},
"configissue": [],
"configstatus": "green",
"customvalue": [],
"datastore": [
{
"_moId": "/tmp/govcsim-DC0-LocalDS_0-949174843@folder-5",
"name": "LocalDS_0"
}
],
"effectiverole": [
-1
],
"guest": {
"appheartbeatstatus": null,
"appstate": null,
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"generationinfo": [],
"guestfamily": null,
"guestfullname": null,
"guestid": null,
"guestkernelcrashed": null,
"guestoperationsready": null,
"gueststate": "",
"gueststatechangesupported": null,
"hostname": null,
"hwversion": null,
"interactiveguestoperationsready": null,
"ipaddress": null,
"ipstack": [],
"net": [],
"screen": null,
"toolsinstalltype": null,
"toolsrunningstatus": "guestToolsNotRunning",
"toolsstatus": "toolsNotInstalled",
"toolsversion": "0",
"toolsversionstatus": null,
"toolsversionstatus2": null
},
"guestheartbeatstatus": null,
"layout": {
"configfile": [],
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"logfile": [],
"snapshot": [],
"swapfile": null
},
"layoutex": {
"disk": [],
"dynamicproperty": [],
"dynamictype": null,
"file": [],
"snapshot": [],
"timestamp": {}
},
"name": "DC0_H0_VM1",
"network": [],
"overallstatus": "green",
"parentvapp": null,
"permission": [],
"recenttask": [],
"resourcepool": {
"_moId": "resgroup-22",
"name": "Resources"
},
"rootsnapshot": [],
"runtime": {
"boottime": null,
"cleanpoweroff": null,
"connectionstate": "connected",
"consolidationneeded": false,
"cryptostate": null,
"dasvmprotection": null,
"device": [],
"dynamicproperty": [],
"dynamictype": null,
"faulttolerancestate": null,
"featuremask": [],
"featurerequirement": [],
"host": {
"_moId": "host-21",
"name": "DC0_H0"
},
"instantclonefrozen": null,
"maxcpuusage": null,
"maxmemoryusage": null,
"memoryoverhead": null,
"minrequiredevcmodekey": null,
"needsecondaryreason": null,
"nummksconnections": 0,
"offlinefeaturerequirement": [],
"onlinestandby": false,
"paused": null,
"powerstate": "poweredOn",
"question": null,
"quiescedforkparent": null,
"recordreplaystate": null,
"snapshotinbackground": null,
"suspendinterval": null,
"suspendtime": null,
"toolsinstallermounted": false,
"vflashcacheallocation": null
},
"snapshot": null,
"storage": {
"dynamicproperty": [],
"dynamictype": null,
"perdatastoreusage": [],
"timestamp": {}
},
"summary": {
"config": {},
"customvalue": [],
"dynamicproperty": [],
"dynamictype": null,
"guest": {},
"overallstatus": "green",
"quickstats": {},
"runtime": {},
"storage": {},
"vm": {}
},
"tag": [],
"triggeredalarmstate": [],
"value": []
}
}
},
"all": {
"children": [
"None",
"guests",
"ungrouped"
]
},
"guests": {
"hosts": [
"DC0_C0_RP0_VM0_cd0681bf-2f18-5c00-9b9b-8197c0095348",
"DC0_C0_RP0_VM1_f7c371d6-2003-5a48-9859-3bc9a8b08908",
"DC0_H0_VM0_265104de-1472-547c-b873-6dc7883fb6cb",
"DC0_H0_VM1_39365506-5a0a-5fd0-be10-9586ad53aaad"
]
}
}
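The document above is a fixture consumed by Ansible's script inventory integration tests. As a hedged illustration only (this wrapper is not part of the repository, and the file path is assumed), a dynamic inventory executable serving such a document just prints it on `--list`; because `_meta.hostvars` is embedded, Ansible does not need to call it once per host with `--host`.
```python
#!/usr/bin/env python
# Hypothetical wrapper, for illustration only: serve a pre-built JSON
# inventory document the way Ansible's script inventory plugin expects.
import json
import sys

INVENTORY_FILE = "inventory.json"  # assumed path to the document above


def main():
    with open(INVENTORY_FILE) as f:
        data = json.load(f)
    if "--list" in sys.argv:
        # _meta.hostvars is embedded, so Ansible will not call --host.
        print(json.dumps(data))
    elif "--host" in sys.argv and len(sys.argv) > 2:
        print(json.dumps(data.get("_meta", {}).get("hostvars", {}).get(sys.argv[-1], {})))
    else:
        print(json.dumps({}))


if __name__ == "__main__":
    main()
```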
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 77564
title: SELinux filesystem unit test fails with: FileNotFoundError
body:
### Summary
When I run the unit test suite in ansible-core from the devel branch on Github (94c910), a single test fails in the `test/units/module_utils/basic/test_filesystem.py` module: `TestOtherFilesystem.test_module_utils_basic_ansible_module_set_directory_attributes_if_different`
```
def _check_rc(rc):
if rc < 0:
errno = get_errno()
> raise OSError(errno, os.strerror(errno))
E FileNotFoundError: [Errno 2] No such file or directory
lib/ansible/module_utils/compat/selinux.py:23: FileNotFoundError
```
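One way to make the test hermetic, sketched here under the assumption that the failure only occurs on hosts where SELinux is enabled (so `selinux_enabled()` returns true and the code path reaches `lgetfilecon_raw` on the nonexistent fake path), is to patch the SELinux check out. This is an illustration, not the merged fix; `run_hermetically` is a hypothetical helper, and `am` and `file_args` refer to the objects built in the failing test.
```python
# Hypothetical sketch, not the merged fix: with selinux_enabled() forced
# to False, set_context_if_different() returns early and never touches
# the fake /path/to/file.
from unittest.mock import patch


def run_hermetically(am, file_args):
    with patch.object(am, 'selinux_enabled', return_value=False):
        return am.set_directory_attributes_if_different(file_args, True)
```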
### Issue Type
Bug Report
### Component Name
selinux
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (devel 94c9106153) last updated 2022/04/15 15:03:57 (GMT +200)
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/rdiscala/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/rdiscala/src/ansible/lib/ansible
ansible collection location = /home/rdiscala/.ansible/collections:/usr/share/ansible/collections
executable location = /home/rdiscala/src/ansible/bin/ansible
python version = 3.10.0 (default, Oct 4 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
<< NO OUTPUT >>
```
### OS / Environment
$ cat /etc/redhat-release
Fedora release 35 (Thirty Five)
### Steps to Reproduce
$ ansible-test units --venv --python 3.10
<< OR >>
$ pytest -r a --cov=. --cov-report=html --fulltrace --color yes test/units/module_utils/basic/test_filesystem.py
### Expected Results
All tests pass
### Actual Results
```console
===================================================================== test session starts =====================================================================
platform linux -- Python 3.10.0, pytest-6.2.5, py-1.11.0, pluggy-0.13.1
rootdir: /home/rdiscala/src/ansible
plugins: forked-1.4.0, xdist-1.34.0, mock-3.2.0, cov-3.0.0
collected 5 items
test/units/module_utils/basic/test_filesystem.py .F... [100%]
========================================================================== FAILURES ===========================================================================
______________________________ TestOtherFilesystem.test_module_utils_basic_ansible_module_set_directory_attributes_if_different _______________________________
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, path = '/path/to/file'
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
> ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
lib/ansible/module_utils/basic.py:687:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
path = '/path/to/file'
def lgetfilecon_raw(path):
con = c_char_p()
try:
> rc = _selinux_lib.lgetfilecon_raw(path, byref(con))
lib/ansible/module_utils/compat/selinux.py:95:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
rc = -1
def _check_rc(rc):
if rc < 0:
errno = get_errno()
> raise OSError(errno, os.strerror(errno))
E FileNotFoundError: [Errno 2] No such file or directory
lib/ansible/module_utils/compat/selinux.py:23: FileNotFoundError
During handling of the above exception, another exception occurred:
self = <unittest.case._Outcome object at 0x7fcabf509810>
test_case = <units.module_utils.basic.test_filesystem.TestOtherFilesystem testMethod=test_module_utils_basic_ansible_module_set_directory_attributes_if_different>
isTest = True
@contextlib.contextmanager
def testPartExecutor(self, test_case, isTest=False):
old_success = self.success
self.success = True
try:
> yield
/usr/lib64/python3.10/unittest/case.py:59:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <units.module_utils.basic.test_filesystem.TestOtherFilesystem testMethod=test_module_utils_basic_ansible_module_set_directory_attributes_if_different>
result = <TestCaseFunction test_module_utils_basic_ansible_module_set_directory_attributes_if_different>
def run(self, result=None):
if result is None:
result = self.defaultTestResult()
startTestRun = getattr(result, 'startTestRun', None)
stopTestRun = getattr(result, 'stopTestRun', None)
if startTestRun is not None:
startTestRun()
else:
stopTestRun = None
result.startTest(self)
try:
testMethod = getattr(self, self._testMethodName)
if (getattr(self.__class__, "__unittest_skip__", False) or
getattr(testMethod, "__unittest_skip__", False)):
# If the class or method was skipped.
skip_why = (getattr(self.__class__, '__unittest_skip_why__', '')
or getattr(testMethod, '__unittest_skip_why__', ''))
self._addSkip(result, self, skip_why)
return result
expecting_failure = (
getattr(self, "__unittest_expecting_failure__", False) or
getattr(testMethod, "__unittest_expecting_failure__", False)
)
outcome = _Outcome(result)
try:
self._outcome = outcome
with outcome.testPartExecutor(self):
self._callSetUp()
if outcome.success:
outcome.expecting_failure = expecting_failure
with outcome.testPartExecutor(self, isTest=True):
> self._callTestMethod(testMethod)
/usr/lib64/python3.10/unittest/case.py:591:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <units.module_utils.basic.test_filesystem.TestOtherFilesystem testMethod=test_module_utils_basic_ansible_module_set_directory_attributes_if_different>
method = <bound method TestOtherFilesystem.test_module_utils_basic_ansible_module_set_directory_attributes_if_different of <uni...ilesystem.TestOtherFilesystem testMethod=test_module_utils_basic_ansible_module_set_directory_attributes_if_different>>
def _callTestMethod(self, method):
> method()
/usr/lib64/python3.10/unittest/case.py:549:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <units.module_utils.basic.test_filesystem.TestOtherFilesystem testMethod=test_module_utils_basic_ansible_module_set_directory_attributes_if_different>
def test_module_utils_basic_ansible_module_set_directory_attributes_if_different(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
file_args = {
'path': '/path/to/file',
'mode': None,
'owner': None,
'group': None,
'seuser': None,
'serole': None,
'setype': None,
'selevel': None,
'secontext': [None, None, None],
'attributes': None,
}
> self.assertEqual(am.set_directory_attributes_if_different(file_args, True), True)
test/units/module_utils/basic/test_filesystem.py:159:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, file_args = {'attributes': None, 'group': None, 'mode': None, 'owner': None, ...}
changed = True, diff = None, expand = True
def set_directory_attributes_if_different(self, file_args, changed, diff=None, expand=True):
> return self.set_fs_attributes_if_different(file_args, changed, diff, expand)
lib/ansible/module_utils/basic.py:1184:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, file_args = {'attributes': None, 'group': None, 'mode': None, 'owner': None, ...}
changed = True, diff = None, expand = True
def set_fs_attributes_if_different(self, file_args, changed, diff=None, expand=True):
# set modes owners and context as needed
> changed = self.set_context_if_different(
file_args['path'], file_args['secontext'], changed, diff
)
lib/ansible/module_utils/basic.py:1163:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, path = '/path/to/file', context = [None, None, None], changed = True, diff = None
def set_context_if_different(self, path, context, changed, diff=None):
if not self.selinux_enabled():
return changed
if self.check_file_absent_if_check_mode(path):
return True
> cur_context = self.selinux_context(path)
lib/ansible/module_utils/basic.py:761:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, path = '/path/to/file'
def selinux_context(self, path):
context = self.selinux_initial_context()
if not self.selinux_enabled():
return context
try:
ret = selinux.lgetfilecon_raw(to_native(path, errors='surrogate_or_strict'))
except OSError as e:
if e.errno == errno.ENOENT:
> self.fail_json(path=path, msg='path %s does not exist' % path)
lib/ansible/module_utils/basic.py:690:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <ansible.module_utils.basic.AnsibleModule object at 0x7fcabf50b0d0>, msg = 'path /path/to/file does not exist'
kwargs = {'failed': True, 'invocation': {'module_args': {}}, 'msg': 'path /path/to/file does not exist', 'path': '/path/to/file'}
def fail_json(self, msg, **kwargs):
''' return from the module, with an error message '''
kwargs['failed'] = True
kwargs['msg'] = msg
# Add traceback if debug or high verbosity and it is missing
# NOTE: Badly named as exception, it really always has been a traceback
if 'exception' not in kwargs and sys.exc_info()[2] and (self._debug or self._verbosity >= 3):
if PY2:
# On Python 2 this is the last (stack frame) exception and as such may be unrelated to the failure
kwargs['exception'] = 'WARNING: The below traceback may *not* be related to the actual failure.\n' +\
''.join(traceback.format_tb(sys.exc_info()[2]))
else:
kwargs['exception'] = ''.join(traceback.format_tb(sys.exc_info()[2]))
self.do_cleanup_files()
self._return_formatted(kwargs)
> sys.exit(1)
E SystemExit: 1
lib/ansible/module_utils/basic.py:1533: SystemExit
-------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------
{"path": "/path/to/file", "failed": true, "msg": "path /path/to/file does not exist", "invocation": {"module_args": {}}}
=================================================================== short test summary info ===================================================================
FAILED test/units/module_utils/basic/test_filesystem.py::TestOtherFilesystem::test_module_utils_basic_ansible_module_set_directory_attributes_if_different
================================================================= 1 failed, 4 passed in 0.19s =================================================================
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
issue_url: https://github.com/ansible/ansible/issues/77564
pull_url: https://github.com/ansible/ansible/pull/79510
before_fix_sha: 042a55fbe01a723b165e694b26940ef672cf1da0
after_fix_sha: 9acca5b3b9f954faee1347866d0312eb0ba3ef66
report_datetime: 2022-04-19T13:32:08Z
language: python
commit_datetime: 2022-12-01T19:23:08Z
updated_file: test/units/module_utils/basic/test_filesystem.py
file_content:
# -*- coding: utf-8 -*-
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2016 Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
from units.mock.procenv import ModuleTestCase
from units.compat.mock import patch, MagicMock
from ansible.module_utils.six.moves import builtins
realimport = builtins.__import__
class TestOtherFilesystem(ModuleTestCase):
def test_module_utils_basic_ansible_module_user_and_group(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
mock_stat = MagicMock()
mock_stat.st_uid = 0
mock_stat.st_gid = 0
with patch('os.lstat', return_value=mock_stat):
self.assertEqual(am.user_and_group('/path/to/file'), (0, 0))
def test_module_utils_basic_ansible_module_find_mount_point(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
def _mock_ismount(path):
if path == b'/':
return True
return False
with patch('os.path.ismount', side_effect=_mock_ismount):
self.assertEqual(am.find_mount_point('/root/fs/../mounted/path/to/whatever'), '/')
def _mock_ismount(path):
if path == b'/subdir/mount':
return True
if path == b'/':
return True
return False
with patch('os.path.ismount', side_effect=_mock_ismount):
self.assertEqual(am.find_mount_point('/subdir/mount/path/to/whatever'), '/subdir/mount')
def test_module_utils_basic_ansible_module_set_owner_if_different(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
self.assertEqual(am.set_owner_if_different('/path/to/file', None, True), True)
self.assertEqual(am.set_owner_if_different('/path/to/file', None, False), False)
am.user_and_group = MagicMock(return_value=(500, 500))
with patch('os.lchown', return_value=None) as m:
self.assertEqual(am.set_owner_if_different('/path/to/file', 0, False), True)
m.assert_called_with(b'/path/to/file', 0, -1)
def _mock_getpwnam(*args, **kwargs):
mock_pw = MagicMock()
mock_pw.pw_uid = 0
return mock_pw
m.reset_mock()
with patch('pwd.getpwnam', side_effect=_mock_getpwnam):
self.assertEqual(am.set_owner_if_different('/path/to/file', 'root', False), True)
m.assert_called_with(b'/path/to/file', 0, -1)
with patch('pwd.getpwnam', side_effect=KeyError):
self.assertRaises(SystemExit, am.set_owner_if_different, '/path/to/file', 'root', False)
m.reset_mock()
am.check_mode = True
self.assertEqual(am.set_owner_if_different('/path/to/file', 0, False), True)
self.assertEqual(m.called, False)
am.check_mode = False
with patch('os.lchown', side_effect=OSError) as m:
self.assertRaises(SystemExit, am.set_owner_if_different, '/path/to/file', 'root', False)
def test_module_utils_basic_ansible_module_set_group_if_different(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
self.assertEqual(am.set_group_if_different('/path/to/file', None, True), True)
self.assertEqual(am.set_group_if_different('/path/to/file', None, False), False)
am.user_and_group = MagicMock(return_value=(500, 500))
with patch('os.lchown', return_value=None) as m:
self.assertEqual(am.set_group_if_different('/path/to/file', 0, False), True)
m.assert_called_with(b'/path/to/file', -1, 0)
def _mock_getgrnam(*args, **kwargs):
mock_gr = MagicMock()
mock_gr.gr_gid = 0
return mock_gr
m.reset_mock()
with patch('grp.getgrnam', side_effect=_mock_getgrnam):
self.assertEqual(am.set_group_if_different('/path/to/file', 'root', False), True)
m.assert_called_with(b'/path/to/file', -1, 0)
with patch('grp.getgrnam', side_effect=KeyError):
self.assertRaises(SystemExit, am.set_group_if_different, '/path/to/file', 'root', False)
m.reset_mock()
am.check_mode = True
self.assertEqual(am.set_group_if_different('/path/to/file', 0, False), True)
self.assertEqual(m.called, False)
am.check_mode = False
with patch('os.lchown', side_effect=OSError) as m:
self.assertRaises(SystemExit, am.set_group_if_different, '/path/to/file', 'root', False)
def test_module_utils_basic_ansible_module_set_directory_attributes_if_different(self):
from ansible.module_utils import basic
basic._ANSIBLE_ARGS = None
am = basic.AnsibleModule(
argument_spec=dict(),
)
file_args = {
'path': '/path/to/file',
'mode': None,
'owner': None,
'group': None,
'seuser': None,
'serole': None,
'setype': None,
'selevel': None,
'secontext': [None, None, None],
'attributes': None,
}
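        # Note: this call chains through set_fs_attributes_if_different() and
        # set_context_if_different(); when the host has SELinux enabled it
        # reaches selinux.lgetfilecon_raw('/path/to/file'), which raises
        # FileNotFoundError for this nonexistent path (see the issue above).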
self.assertEqual(am.set_directory_attributes_if_different(file_args, True), True)
self.assertEqual(am.set_directory_attributes_if_different(file_args, False), False)
status: closed
repo_name: ansible/ansible
repo_url: https://github.com/ansible/ansible
issue_id: 79523
title: apt module breaks with strange cache error using python3
body:
### Summary
This is the successor of #75262, as I am not able to comment there anymore.
What is the state of this annoying bug? It is still not fixed and prevents all plays from running!
### Issue Type
Bug Report
### Component Name
lib/ansible/modules/apt.py
### Ansible Version
```console
$ ansible --version
> ansible --version
ERROR: Ansible requires the locale encoding to be UTF-8; Detected ISO8859-1.
ikki: ?1 !1021
> LC_ALL=C.UTF-8 ansible --version
ansible [core 2.14.0]
config file = /home/klaus/.ansible.cfg
configured module search path = ['/home/klaus/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3/dist-packages/ansible
ansible collection location = /home/klaus/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.10.8 (main, Nov 4 2022, 09:21:25) [GCC 12.2.0] (/usr/bin/python3)
jinja version = 3.0.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
> ansible-config dump --only-changed -t all
ERROR: Ansible requires the locale encoding to be UTF-8; Detected ISO8859-1.
```
### OS / Environment
Devuan
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- name: I need patch here
package:
name: patch
state: present
```
### Expected Results
It works
### Actual Results
```console
TASK [debianfix : I need patch here] ***************************************************************************
fatal: [chil]: FAILED! => {"changed": false, "msg": "<class 'apt_pkg.Cache'> returned a result with an exception set"}
```
### Code of Conduct
- [ ] I agree to follow the Ansible Code of Conduct
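The changelog fragment named on the fixing PR (`79546-apt-fix-setting-locale.yml`) suggests the remedy is to force a UTF-8-capable locale before `apt_pkg` is used. Below is only a hedged sketch of that general approach; the function name and fallback order are assumptions, not the actual patch.
```python
# Hypothetical sketch: select the first UTF-8-capable locale the system
# accepts before using apt_pkg, falling back to plain C as a last resort.
import locale
import os


def force_utf8_locale():
    for loc in ("C.UTF-8", "en_US.UTF-8"):
        try:
            locale.setlocale(locale.LC_ALL, loc)
            os.environ["LC_ALL"] = loc
            return loc
        except locale.Error:
            continue
    locale.setlocale(locale.LC_ALL, "C")  # ASCII only; may still break on non-ASCII output
    os.environ["LC_ALL"] = "C"
    return "C"
```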
issue_url: https://github.com/ansible/ansible/issues/79523
pull_url: https://github.com/ansible/ansible/pull/79546
before_fix_sha: 527abba86010629e21f8227c4234c393e4ee8122
after_fix_sha: 11e43e9d6e9809ca8fdf56f814b89da3dc0d5659
report_datetime: 2022-12-03T16:53:26Z
language: python
commit_datetime: 2022-12-08T19:06:08Z
updated_file: changelogs/fragments/79546-apt-fix-setting-locale.yml
file_content: