| column | type |
|---|---|
| status | string (1 class) |
| repo_name | string (31 classes) |
| repo_url | string (31 classes) |
| issue_id | int64 (1 to 104k) |
| title | string (length 4 to 369) |
| body | string (length 0 to 254k, nullable) |
| issue_url | string (length 37 to 56) |
| pull_url | string (length 37 to 54) |
| before_fix_sha | string (length 40) |
| after_fix_sha | string (length 40) |
| report_datetime | timestamp[us, tz=UTC] |
| language | string (5 classes) |
| commit_datetime | timestamp[us, tz=UTC] |
| updated_file | string (length 4 to 188) |
| file_content | string (length 0 to 5.12M) |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples of missing navigation entries:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core; I was specifically using `devel`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/ansible_index.rst
|
.. _ansible_documentation:
..
This is the index file for Ansible the package. It gets symlinked to index.rst by the Makefile
Ansible Documentation
=====================
About Ansible
`````````````
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
Ansible's main goals are simplicity and ease-of-use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans--even those not familiar with the program.
We believe simplicity is relevant to all sizes of environments, so we design for busy users of all types: developers, sysadmins, release engineers, IT managers, and everyone in between. Ansible is appropriate for managing all environments, from small setups with a handful of instances to enterprise environments with many thousands of instances.
You can learn more at `AnsibleFest <https://www.ansible.com/ansiblefest>`_, the annual event for all Ansible contributors, users, and customers hosted by Red Hat. AnsibleFest is the place to connect with others, learn new skills, and find a new friend to automate with.
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Also, security exposure is greatly reduced because Ansible uses OpenSSH — the open source connectivity tool for remote login with the SSH (Secure Shell) protocol.
Ansible is decentralized--it relies on your existing OS credentials to control access to remote machines. And if needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the version of Ansible noted in the upper left corner of this page. We maintain multiple versions of Ansible and the Ansible documentation, so please be sure you are using the documentation version that covers the version of Ansible you are using. For recent features, we note the version of Ansible where the feature was added.
Ansible releases a new major release approximately twice a year. The core application evolves somewhat conservatively, valuing simplicity in language design and setup. Contributors develop and change modules and plugins, hosted in collections since version 2.10, much more quickly.
.. toctree::
:maxdepth: 2
:caption: Ansible getting started
getting_started/index
.. toctree::
:maxdepth: 2
:caption: Installation, Upgrade & Configuration
installation_guide/index
porting_guides/porting_guides
.. toctree::
:maxdepth: 2
:caption: Using Ansible
inventory_guide/index
command_guide/index
playbook_guide/index
vault_guide/index
module_plugin_guide/index
collections_guide/index
os_guide/index
tips_tricks/index
.. toctree::
:maxdepth: 2
:caption: Contributing to Ansible
community/index
community/contributions_collections
community/contributions
community/advanced_index
dev_guide/style_guide/index
.. toctree::
:maxdepth: 2
:caption: Extending Ansible
dev_guide/index
.. toctree::
:glob:
:maxdepth: 1
:caption: Common Ansible Scenarios
scenario_guides/cloud_guides
scenario_guides/network_guides
scenario_guides/virt_guides
.. toctree::
:maxdepth: 2
:caption: Network Automation
network/getting_started/index
network/user_guide/index
network/dev_guide/index
.. toctree::
:maxdepth: 2
:caption: Ansible Galaxy
galaxy/user_guide.rst
galaxy/dev_guide.rst
.. toctree::
:maxdepth: 1
:caption: Reference & Appendices
reference_appendices/playbooks_keywords
reference_appendices/common_return_values
reference_appendices/config
reference_appendices/general_precedence
reference_appendices/YAMLSyntax
reference_appendices/python_3_support
reference_appendices/interpreter_discovery
reference_appendices/release_and_maintenance
reference_appendices/test_strategies
dev_guide/testing/sanity/index
reference_appendices/faq
reference_appendices/glossary
reference_appendices/module_utils
reference_appendices/special_variables
reference_appendices/tower
reference_appendices/automationhub
reference_appendices/logging
.. toctree::
:maxdepth: 2
:caption: Roadmaps
roadmap/ansible_roadmap_index.rst
roadmap/ansible_core_roadmap_index.rst
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples of missing navigation entries:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core; I was specifically using `devel`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/collections_guide/collections_index.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples of missing navigation entries:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core; I was specifically using `devel`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/collections_guide/index.rst
|
.. _collections_index:
.. _collections:
#########################
Using Ansible collections
#########################
.. note::
**Making Open Source More Inclusive**
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. We ask that you open an issue or pull request if you come upon a term that we have missed. For more details, see `our CTO Chris Wright's message <https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language>`_.
Welcome to the Ansible guide for working with collections.
Collections are a distribution format for Ansible content that can include playbooks, roles, modules, and plugins.
You can install and use collections through a distribution server, such as Ansible Galaxy, or a Pulp 3 Galaxy server.
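As a quick sketch (the collection name and version below are illustrative, not taken from this guide), a ``requirements.yml`` file that you can feed to ``ansible-galaxy collection install -r requirements.yml`` looks like this:

.. code-block:: yaml

   # requirements.yml - collections to install from a distribution server
   collections:
     - name: community.general    # namespace.name of the collection
       version: ">=5.0.0"         # optional version constraint (example value)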
.. toctree::
:maxdepth: 2
collections_installing
collections_downloading
collections_listing
collections_verifying
collections_using_playbooks
../collections/index
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples of missing navigation entries:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core; I was specifically using `devel`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/core_index.rst
|
.. _ansible_core_documentation:
..
This is the index file for ansible-core. It gets symlinked to index.rst by the Makefile
**************************
Ansible Core Documentation
**************************
About ansible-core
===================
Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
Ansible core, or ``ansible-core``, is the main building block and architecture for Ansible, and includes:
* CLI tools such as ``ansible-playbook``, ``ansible-doc``, and others for driving and interacting with automation.
* The Ansible language that uses YAML to create a set of rules for developing Ansible Playbooks and includes functions such as conditionals, blocks, includes, loops, and other Ansible imperatives (a short sketch follows this list).
* An architectural framework that allows extensions through Ansible collections.
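A minimal sketch of that YAML language in use (the host group and package names are illustrative assumptions):

.. code-block:: yaml

   - name: Illustrative play with a loop and a conditional
     hosts: webservers                  # assumed inventory group
     tasks:
       - name: Install packages on Debian-family hosts only
         ansible.builtin.package:
           name: "{{ item }}"
         loop:
           - git
           - curl
         when: ansible_facts['os_family'] == 'Debian'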
Ansible's main goals are simplicity and ease-of-use. It also has a strong focus on security and reliability, featuring a minimum of moving parts, usage of OpenSSH for transport (with other transports and pull modes as alternatives), and a language that is designed around auditability by humans--even those not familiar with the program.
We believe simplicity is relevant to all sizes of environments, so we design for busy users of all types: developers, sysadmins, release engineers, IT managers, and everyone in between. Ansible is appropriate for managing all environments, from small setups with a handful of instances to enterprise environments with many thousands of instances.
You can learn more at `AnsibleFest <https://www.ansible.com/ansiblefest>`_, the annual event for all Ansible contributors, users, and customers hosted by Red Hat. AnsibleFest is the place to connect with others, learn new skills, and find a new friend to automate with.
Ansible manages machines in an agent-less manner. There is never a question of how to upgrade remote daemons or the problem of not being able to manage systems because daemons are uninstalled. Because OpenSSH is one of the most peer-reviewed open source components, security exposure is greatly reduced. Ansible is decentralized--it relies on your existing OS credentials to control access to remote machines. If needed, Ansible can easily connect with Kerberos, LDAP, and other centralized authentication management systems.
This documentation covers the version of ``ansible-core`` noted in the upper left corner of this page. We maintain multiple versions of ``ansible-core`` and of the documentation, so please be sure you are using the version of the documentation that covers the version of Ansible you're using. For recent features, we note the version of Ansible where the feature was added.
``ansible-core`` releases a new major release approximately twice a year. The core application evolves somewhat conservatively, valuing simplicity in language design and setup. Contributors develop and change modules and plugins, hosted in collections since version 2.10, much more quickly.
.. toctree::
:maxdepth: 2
:caption: Ansible getting started
getting_started/index
.. toctree::
:maxdepth: 2
:caption: Installation, Upgrade & Configuration
installation_guide/index
porting_guides/core_porting_guides
.. toctree::
:maxdepth: 2
:caption: Using Ansible Core
inventory_guide/index
command_guide/index
playbook_guide/index
vault_guide/index
module_plugin_guide/index
collections_guide/index
os_guide/index
tips_tricks/index
.. toctree::
:maxdepth: 2
:caption: Contributing to Ansible Core
community/index
community/contributions
community/advanced_index
dev_guide/style_guide/index
.. toctree::
:maxdepth: 2
:caption: Extending Ansible
dev_guide/index
.. toctree::
:maxdepth: 1
:caption: Reference & Appendices
reference_appendices/playbooks_keywords
reference_appendices/common_return_values
reference_appendices/config
reference_appendices/general_precedence
reference_appendices/YAMLSyntax
reference_appendices/python_3_support
reference_appendices/interpreter_discovery
reference_appendices/release_and_maintenance
reference_appendices/test_strategies
dev_guide/testing/sanity/index
reference_appendices/faq
reference_appendices/glossary
reference_appendices/module_utils
reference_appendices/special_variables
reference_appendices/tower
reference_appendices/automationhub
reference_appendices/logging
.. toctree::
:maxdepth: 2
:caption: Roadmaps
roadmap/ansible_core_roadmap_index.rst
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples of missing navigation entries:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core; I was specifically using `devel`.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/module_plugin_guide/index.rst
|
.. _modules_plugins_index:
#################################
Using Ansible modules and plugins
#################################
.. note::
**Making Open Source More Inclusive**
Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. We ask that you open an issue or pull request if you come upon a term that we have missed. For more details, see `our CTO Chris Wright's message <https://www.redhat.com/en/blog/making-open-source-more-inclusive-eradicating-problematic-language>`_.
Welcome to the Ansible guide for working with modules, plugins, and collections.
Ansible modules are units of code that can control system resources or execute system commands.
Ansible provides a module library that you can execute directly on remote hosts or through playbooks.
You can also write custom modules.
Similar to modules are plugins, which are pieces of code that extend core Ansible functionality.
Ansible uses a plugin architecture to enable a rich, flexible, and expandable feature set.
Ansible ships with several plugins and lets you easily use your own plugins.
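As a short sketch of how modules and plugins relate (the variable and its value are illustrative assumptions), the task below calls a module, while the ``b64encode`` filter it applies is provided by a filter plugin:

.. code-block:: yaml

   - name: Module call that also exercises a filter plugin
     ansible.builtin.debug:             # 'debug' is a module
       msg: "{{ secret | b64encode }}"  # 'b64encode' is a filter plugin
     vars:
       secret: hello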
.. toctree::
:maxdepth: 2
modules_intro
modules_support
plugin_filtering_config
../plugins/plugins
../collections/all_plugins
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,760 |
ansible-core docs site missing navigation for common pages
|
### Summary
Here are a few examples missing:
1. REFERENCE & APPENDICES -> Collection Index
2. REFERENCE & APPENDICES -> Indexes of all modules and plugins
3. The entire "ANSIBLE GALAXY" section
Links generated by core always link to the `ansible-core/` docs. With these links and sections missing, the docs site is missing critical information.
Note: I didn't check all versions of ansible-core, specifically was using `devel`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/core_index.rst
### Ansible Version
```console
$ ansible --version
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Additional Information
N/A
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78760
|
https://github.com/ansible/ansible/pull/78763
|
563f3ecc11e9fc9ec9995186409de9dcae038d80
|
540442db2eb3d3c02ca750143571d0e9c766df3a
| 2022-09-13T15:13:15Z |
python
| 2022-09-13T18:43:21Z |
docs/docsite/rst/module_plugin_guide/modules_plugins_index.rst
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,600 |
scp_if_ssh not working as intended with OpenSSH since version 9.0
|
### Summary
The option `scp_if_ssh = true` is used to force Ansible to use scp instead of sftp on targets that don't support sftp. However, since OpenSSH 9.0 (8.8 on Arch Linux, it seems) even the scp utility defaults to using sftp. The old behavior can be enabled by additionally setting `scp_extra_args = "-O"` to force scp to use the old protocol.
I recognize that this is not an Ansible bug, but it may break documented and expected behavior.
OpenSSH Changelog: https://www.openssh.com/txt/release-9.0
> This release switches scp(1) from using the legacy scp/rcp protocol to using the SFTP protocol by default.
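A minimal sketch of the workaround expressed as inventory variables (the variable names come from the ssh connection plugin's documented options; the placement in `group_vars/all.yml` is an illustrative choice):

```yaml
# group_vars/all.yml - force scp and its legacy protocol for hosts without sftp
ansible_ssh_transfer_method: scp   # prefer scp over sftp (ansible-core 2.12+)
ansible_scp_extra_args: "-O"       # make OpenSSH >= 9.0 scp use the legacy protocol
```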
### Issue Type
~~Bug Report~~
Documentation Report
### Component Name
connection, ssh, scp
### Ansible Version
```console
ansible [core 2.13.2]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONNECTION:
==========
ssh:
___
scp_extra_args(env: ANSIBLE_SCP_EXTRA_ARGS) = -O
scp_if_ssh(env: ANSIBLE_SCP_IF_SSH) = true
```
### OS / Environment
Debian Sid
### Steps to Reproduce
configure sshd to not offer sftp (e.g. delete `Subsystem sftp /usr/lib/ssh/sftp-server` from `/etc/ssh/sshd_config` and restart)
create a small example playbook; contents are irrelevant
```yaml
- hosts: localhost
gather_facts: true
remote_user: root
tasks:
- name: install a nonexistant package
package:
name:
- less-is-more
```
execute with Ansible configuration or an environment setting to use scp:
```
export ANSIBLE_SCP_IF_SSH=true
ansible-playbook -c ssh playbook.yml
```
### Expected Results
```
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]
TASK [install a nonexistant package] *******************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "No package matching 'less-is-more' is available"}
PLAY RECAP *********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
with only `scp_if_ssh`:
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via scp: scp: Connection closed\r\n", "unreachable": true}
PLAY RECAP *********************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
with the additional setting to add `-O` to scp (working correctly):
```
ansible@instance:~$ export ANSIBLE_SCP_EXTRA_ARGS="-O"
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]
TASK [install a nonexistant package] *******************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "No package matching 'less-is-more' is available"}
PLAY RECAP *********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78600
|
https://github.com/ansible/ansible/pull/78745
|
d4407ca68822b1f12254957ec9918f94c23d374f
|
952ee88f33de4d49ff5f7bd4bec3431a4b0fdc78
| 2022-08-19T20:18:39Z |
python
| 2022-09-15T15:03:09Z |
lib/ansible/plugins/connection/ssh.py
|
# Copyright (c) 2012, Michael DeHaan <[email protected]>
# Copyright 2015 Abhijit Menon-Sen <[email protected]>
# Copyright 2017 Toshio Kuratomi <[email protected]>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
DOCUMENTATION = '''
name: ssh
short_description: connect via SSH client binary
description:
- This connection plugin allows Ansible to communicate with the target machines through the normal SSH command line.
- Ansible does not expose a channel to allow communication between the user and the SSH process to accept
a password manually to decrypt an SSH key when using this connection plugin (which is the default). The
use of C(ssh-agent) is highly recommended.
author: ansible (@core)
extends_documentation_fragment:
- connection_pipelining
version_added: historical
notes:
- Many options default to C(None) here but that only means we do not override the SSH tool's defaults and/or configuration.
For example, if you specify the port in this plugin it will override any C(Port) entry in your C(.ssh/config).
- The ssh CLI tool uses return code 255 as a 'connection error'; this can conflict with commands/tools that
also return 255 as an error code and will look like an 'unreachable' condition or 'connection error' to this plugin.
options:
host:
description: Hostname/IP to connect to.
default: inventory_hostname
vars:
- name: inventory_hostname
- name: ansible_host
- name: ansible_ssh_host
- name: delegated_vars['ansible_host']
- name: delegated_vars['ansible_ssh_host']
host_key_checking:
description: Determines if SSH should check host keys.
default: True
type: boolean
ini:
- section: defaults
key: 'host_key_checking'
- section: ssh_connection
key: 'host_key_checking'
version_added: '2.5'
env:
- name: ANSIBLE_HOST_KEY_CHECKING
- name: ANSIBLE_SSH_HOST_KEY_CHECKING
version_added: '2.5'
vars:
- name: ansible_host_key_checking
version_added: '2.5'
- name: ansible_ssh_host_key_checking
version_added: '2.5'
password:
description: Authentication password for the C(remote_user). Can be supplied as CLI option.
vars:
- name: ansible_password
- name: ansible_ssh_pass
- name: ansible_ssh_password
sshpass_prompt:
description:
- Password prompt that sshpass should search for. Supported by sshpass 1.06 and up.
- Defaults to C(Enter PIN for) when pkcs11_provider is set.
default: ''
ini:
- section: 'ssh_connection'
key: 'sshpass_prompt'
env:
- name: ANSIBLE_SSHPASS_PROMPT
vars:
- name: ansible_sshpass_prompt
version_added: '2.10'
ssh_args:
description: Arguments to pass to all SSH CLI tools.
default: '-C -o ControlMaster=auto -o ControlPersist=60s'
ini:
- section: 'ssh_connection'
key: 'ssh_args'
env:
- name: ANSIBLE_SSH_ARGS
vars:
- name: ansible_ssh_args
version_added: '2.7'
ssh_common_args:
description: Common extra args for all SSH CLI tools.
ini:
- section: 'ssh_connection'
key: 'ssh_common_args'
version_added: '2.7'
env:
- name: ANSIBLE_SSH_COMMON_ARGS
version_added: '2.7'
vars:
- name: ansible_ssh_common_args
cli:
- name: ssh_common_args
default: ''
ssh_executable:
default: ssh
description:
- This defines the location of the SSH binary. It defaults to C(ssh) which will use the first SSH binary available in $PATH.
- This option is usually not required; it might be useful when access to system SSH is restricted,
or when using SSH wrappers to connect to remote hosts.
env: [{name: ANSIBLE_SSH_EXECUTABLE}]
ini:
- {key: ssh_executable, section: ssh_connection}
#const: ANSIBLE_SSH_EXECUTABLE
version_added: "2.2"
vars:
- name: ansible_ssh_executable
version_added: '2.7'
sftp_executable:
default: sftp
description:
- This defines the location of the sftp binary. It defaults to C(sftp) which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SFTP_EXECUTABLE}]
ini:
- {key: sftp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_sftp_executable
version_added: '2.7'
scp_executable:
default: scp
description:
- This defines the location of the scp binary. It defaults to C(scp) which will use the first binary available in $PATH.
env: [{name: ANSIBLE_SCP_EXECUTABLE}]
ini:
- {key: scp_executable, section: ssh_connection}
version_added: "2.6"
vars:
- name: ansible_scp_executable
version_added: '2.7'
scp_extra_args:
description: Extra arguments exclusive to the C(scp) CLI
vars:
- name: ansible_scp_extra_args
env:
- name: ANSIBLE_SCP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: scp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: scp_extra_args
default: ''
sftp_extra_args:
description: Extra arguments exclusive to the C(sftp) CLI
vars:
- name: ansible_sftp_extra_args
env:
- name: ANSIBLE_SFTP_EXTRA_ARGS
version_added: '2.7'
ini:
- key: sftp_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: sftp_extra_args
default: ''
ssh_extra_args:
description: Extra arguments exclusive to the SSH CLI.
vars:
- name: ansible_ssh_extra_args
env:
- name: ANSIBLE_SSH_EXTRA_ARGS
version_added: '2.7'
ini:
- key: ssh_extra_args
section: ssh_connection
version_added: '2.7'
cli:
- name: ssh_extra_args
default: ''
reconnection_retries:
description:
- Number of attempts to connect.
- Ansible retries connections only if it gets an SSH error with a return code of 255.
- Any errors with return codes other than 255 indicate an issue with program execution.
default: 0
type: integer
env:
- name: ANSIBLE_SSH_RETRIES
ini:
- section: connection
key: retries
- section: ssh_connection
key: retries
vars:
- name: ansible_ssh_retries
version_added: '2.7'
port:
description: Remote port to connect to.
type: int
ini:
- section: defaults
key: remote_port
env:
- name: ANSIBLE_REMOTE_PORT
vars:
- name: ansible_port
- name: ansible_ssh_port
keyword:
- name: port
remote_user:
description:
- User name with which to login to the remote server, normally set by the remote_user keyword.
- If no user is supplied, Ansible will let the SSH client binary choose the user as it normally does.
ini:
- section: defaults
key: remote_user
env:
- name: ANSIBLE_REMOTE_USER
vars:
- name: ansible_user
- name: ansible_ssh_user
cli:
- name: user
keyword:
- name: remote_user
pipelining:
env:
- name: ANSIBLE_PIPELINING
- name: ANSIBLE_SSH_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
- section: ssh_connection
key: pipelining
vars:
- name: ansible_pipelining
- name: ansible_ssh_pipelining
private_key_file:
description:
- Path to private key file to use for authentication.
ini:
- section: defaults
key: private_key_file
env:
- name: ANSIBLE_PRIVATE_KEY_FILE
vars:
- name: ansible_private_key_file
- name: ansible_ssh_private_key_file
cli:
- name: private_key_file
option: '--private-key'
control_path:
description:
- This is the location to save SSH's ControlPath sockets; it uses SSH's variable substitution.
- Since 2.3, if null (default), ansible will generate a unique hash. Use ``%(directory)s`` to indicate where to use the control dir path setting.
- Before 2.3 it defaulted to ``control_path=%(directory)s/ansible-ssh-%%h-%%p-%%r``.
- Be aware that this setting is ignored if C(-o ControlPath) is set in ssh args.
env:
- name: ANSIBLE_SSH_CONTROL_PATH
ini:
- key: control_path
section: ssh_connection
vars:
- name: ansible_control_path
version_added: '2.7'
control_path_dir:
default: ~/.ansible/cp
description:
- This sets the directory to use for ssh control path if the control path setting is null.
- It also provides the ``%(directory)s`` variable for the control path setting.
env:
- name: ANSIBLE_SSH_CONTROL_PATH_DIR
ini:
- section: ssh_connection
key: control_path_dir
vars:
- name: ansible_control_path_dir
version_added: '2.7'
sftp_batch_mode:
default: 'yes'
description: 'TODO: write it'
env: [{name: ANSIBLE_SFTP_BATCH_MODE}]
ini:
- {key: sftp_batch_mode, section: ssh_connection}
type: bool
vars:
- name: ansible_sftp_batch_mode
version_added: '2.7'
ssh_transfer_method:
description:
- "Preferred method to use when transferring files over ssh"
- Setting to 'smart' (default) will try them in order, until one succeeds or they all fail
- Using 'piped' creates an ssh pipe with C(dd) on either side to copy the data
choices: ['sftp', 'scp', 'piped', 'smart']
env: [{name: ANSIBLE_SSH_TRANSFER_METHOD}]
ini:
- {key: transfer_method, section: ssh_connection}
vars:
- name: ansible_ssh_transfer_method
version_added: '2.12'
scp_if_ssh:
deprecated:
why: In favor of the "ssh_transfer_method" option.
version: "2.17"
alternatives: ssh_transfer_method
default: smart
description:
- "Preferred method to use when transferring files over SSH."
- When set to I(smart), Ansible will try them until one succeeds or they all fail.
- If set to I(True), it will force 'scp'; if I(False), it will use 'sftp'.
- This setting will be overridden by ssh_transfer_method if set.
env: [{name: ANSIBLE_SCP_IF_SSH}]
ini:
- {key: scp_if_ssh, section: ssh_connection}
vars:
- name: ansible_scp_if_ssh
version_added: '2.7'
use_tty:
version_added: '2.5'
default: 'yes'
description: add -tt to ssh commands to force tty allocation.
env: [{name: ANSIBLE_SSH_USETTY}]
ini:
- {key: usetty, section: ssh_connection}
type: bool
vars:
- name: ansible_ssh_use_tty
version_added: '2.7'
timeout:
default: 10
description:
- This is the default amount of time we will wait while establishing an SSH connection.
- It also controls how long we can wait before reading from the connection once it is established (select on the socket).
env:
- name: ANSIBLE_TIMEOUT
- name: ANSIBLE_SSH_TIMEOUT
version_added: '2.11'
ini:
- key: timeout
section: defaults
- key: timeout
section: ssh_connection
version_added: '2.11'
vars:
- name: ansible_ssh_timeout
version_added: '2.11'
cli:
- name: timeout
type: integer
pkcs11_provider:
version_added: '2.12'
default: ""
description:
- "PKCS11 SmartCard provider such as opensc, example: /usr/local/lib/opensc-pkcs11.so"
- Requires sshpass version 1.06+; sshpass must support the -P option.
env: [{name: ANSIBLE_PKCS11_PROVIDER}]
ini:
- {key: pkcs11_provider, section: ssh_connection}
vars:
- name: ansible_ssh_pkcs11_provider
'''
import errno
import fcntl
import hashlib
import os
import pty
import re
import shlex
import subprocess
import time
from functools import wraps
from ansible.errors import (
AnsibleAuthenticationFailure,
AnsibleConnectionFailure,
AnsibleError,
AnsibleFileNotFound,
)
from ansible.errors import AnsibleOptionsError
from ansible.module_utils.compat import selectors
from ansible.module_utils.six import PY3, text_type, binary_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.parsing.convert_bool import BOOLEANS, boolean
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.shell.powershell import _parse_clixml
from ansible.utils.display import Display
from ansible.utils.path import unfrackpath, makedirs_safe
display = Display()
# error messages that indicate 255 return code is not from ssh itself.
b_NOT_SSH_ERRORS = (b'Traceback (most recent call last):', # Python-2.6 when there's an exception
# while invoking a script via -m
b'PHP Parse error:', # Php always returns with error
b'chmod: invalid mode', # chmod, but really only on AIX
b'chmod: A flag or octal number is not correct.', # chmod, other AIX
)
SSHPASS_AVAILABLE = None
SSH_DEBUG = re.compile(r'^debug\d+: .*')
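# Matches client-side debug lines emitted by `ssh -v/-vv/-vvv` (for example `debug1: ...`),
# so that they can be skipped when scanning output for become prompts further below.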
class AnsibleControlPersistBrokenPipeError(AnsibleError):
''' ControlPersist broken pipe '''
pass
def _handle_error(remaining_retries, command, return_tuple, no_log, host, display=display):
# sshpass errors
if command == b'sshpass':
# Error 5 is invalid/incorrect password. Raise an exception to prevent retries from locking the account.
if return_tuple[0] == 5:
msg = 'Invalid/incorrect username/password. Skipping remaining {0} retries to prevent account lockout:'.format(remaining_retries)
if remaining_retries <= 0:
msg = 'Invalid/incorrect password:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleAuthenticationFailure(msg)
# sshpass return codes are 1-6. We handled 5 previously, so this catches the other scenarios.
# No exception is raised, so the connection is retried - except when attempting to use
# sshpass_prompt with an sshpass that won't let us pass -P, in which case we fail loudly.
elif return_tuple[0] in [1, 2, 3, 4, 6]:
msg = 'sshpass error:'
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
details = to_native(return_tuple[2]).rstrip()
if "sshpass: invalid option -- 'P'" in details:
details = 'Installed sshpass version does not support customized password prompts. ' \
'Upgrade sshpass to use sshpass_prompt, or otherwise switch to ssh keys.'
raise AnsibleError('{0} {1}'.format(msg, details))
msg = '{0} {1}'.format(msg, details)
if return_tuple[0] == 255:
SSH_ERROR = True
for signature in b_NOT_SSH_ERRORS:
# 1 == stdout, 2 == stderr
if signature in return_tuple[1] or signature in return_tuple[2]:
SSH_ERROR = False
break
if SSH_ERROR:
msg = "Failed to connect to the host via ssh:"
if no_log:
msg = '{0} <error censored due to no log>'.format(msg)
else:
msg = '{0} {1}'.format(msg, to_native(return_tuple[2]).rstrip())
raise AnsibleConnectionFailure(msg)
# For other errors, no exception is raised so the connection is retried and we only log the messages
if 1 <= return_tuple[0] <= 254:
msg = u"Failed to connect to the host via ssh:"
if no_log:
msg = u'{0} <error censored due to no log>'.format(msg)
else:
msg = u'{0} {1}'.format(msg, to_text(return_tuple[2]).rstrip())
display.vvv(msg, host=host)
def _ssh_retry(func):
"""
Decorator to retry ssh/scp/sftp in the case of a connection failure
Will retry if:
* an exception is caught
* ssh returns 255
Will not retry if
* sshpass returns 5 (invalid password, to prevent account lockouts)
* remaining_tries is < 2
* retries limit reached
"""
@wraps(func)
def wrapped(self, *args, **kwargs):
remaining_tries = int(self.get_option('reconnection_retries')) + 1
cmd_summary = u"%s..." % to_text(args[0])
conn_password = self.get_option('password') or self._play_context.password
for attempt in range(remaining_tries):
cmd = args[0]
if attempt != 0 and conn_password and isinstance(cmd, list):
# If this is a retry, the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
try:
try:
return_tuple = func(self, *args, **kwargs)
# TODO: this should come from task
if self._play_context.no_log:
display.vvv(u'rc=%s, stdout and stderr censored due to no log' % return_tuple[0], host=self.host)
else:
display.vvv(return_tuple, host=self.host)
# 0 = success
# 1-254 = remote command return code
# 255 could be a failure from the ssh command itself
except (AnsibleControlPersistBrokenPipeError):
# Retry one more time because of the ControlPersist broken pipe (see #16731)
cmd = args[0]
if conn_password and isinstance(cmd, list):
# This is a retry, so the fd/pipe for sshpass is closed, and we need a new one
self.sshpass_pipe = os.pipe()
cmd[1] = b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')
display.vvv(u"RETRYING BECAUSE OF CONTROLPERSIST BROKEN PIPE")
return_tuple = func(self, *args, **kwargs)
remaining_retries = remaining_tries - attempt - 1
_handle_error(remaining_retries, cmd[0], return_tuple, self._play_context.no_log, self.host)
break
# 5 = Invalid/incorrect password from sshpass
except AnsibleAuthenticationFailure:
# Raising this exception, which is subclassed from AnsibleConnectionFailure, prevents further retries
raise
except (AnsibleConnectionFailure, Exception) as e:
if attempt == remaining_tries - 1:
raise
else:
pause = 2 ** attempt - 1
if pause > 30:
pause = 30
if isinstance(e, AnsibleConnectionFailure):
msg = u"ssh_retry: attempt: %d, ssh return code is 255. cmd (%s), pausing for %d seconds" % (attempt + 1, cmd_summary, pause)
else:
msg = (u"ssh_retry: attempt: %d, caught exception(%s) from cmd (%s), "
u"pausing for %d seconds" % (attempt + 1, to_text(e), cmd_summary, pause))
display.vv(msg, host=self.host)
time.sleep(pause)
continue
return return_tuple
return wrapped
class Connection(ConnectionBase):
''' ssh based connections '''
transport = 'ssh'
has_pipelining = True
def __init__(self, *args, **kwargs):
super(Connection, self).__init__(*args, **kwargs)
# TODO: all should come from get_option(), but might not be set at this point yet
self.host = self._play_context.remote_addr
self.port = self._play_context.port
self.user = self._play_context.remote_user
self.control_path = None
self.control_path_dir = None
# Windows operates differently from a POSIX connection/shell plugin,
# we need to set various properties to ensure SSH on Windows continues
# to work
if getattr(self._shell, "_IS_WINDOWS", False):
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = ('.ps1', '.exe', '')
self.allow_executable = False
# The connection is created by running ssh/scp/sftp from the exec_command,
# put_file, and fetch_file methods, so we don't need to do any connection
# management here.
def _connect(self):
return self
@staticmethod
def _create_control_path(host, port, user, connection=None, pid=None):
'''Make a hash for the controlpath based on con attributes'''
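        # For example, ('web1', 22, 'root') hashes the string 'web1-22-root' and yields
        # '%(directory)s/' plus the first 10 hex chars of its sha1 digest (illustrative
        # values; the optional connection and pid are appended to the string when supplied).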
pstring = '%s-%s-%s' % (host, port, user)
if connection:
pstring += '-%s' % connection
if pid:
pstring += '-%s' % to_text(pid)
m = hashlib.sha1()
m.update(to_bytes(pstring))
digest = m.hexdigest()
cpath = '%(directory)s/' + digest[:10]
return cpath
@staticmethod
def _sshpass_available():
global SSHPASS_AVAILABLE
# We test once if sshpass is available, and remember the result. It
# would be nice to use distutils.spawn.find_executable for this, but
# distutils isn't always available; shutil.which() is Python3-only.
if SSHPASS_AVAILABLE is None:
try:
p = subprocess.Popen(["sshpass"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.communicate()
SSHPASS_AVAILABLE = True
except OSError:
SSHPASS_AVAILABLE = False
return SSHPASS_AVAILABLE
@staticmethod
def _persistence_controls(b_command):
'''
Takes a command array and scans it for ControlPersist and ControlPath
settings and returns two booleans indicating whether either was found.
This could be smarter, e.g. returning false if ControlPersist is 'no',
but for now we do it the simple way.
'''
controlpersist = False
controlpath = False
for b_arg in (a.lower() for a in b_command):
if b'controlpersist' in b_arg:
controlpersist = True
elif b'controlpath' in b_arg:
controlpath = True
return controlpersist, controlpath
def _add_args(self, b_command, b_args, explanation):
"""
Adds arguments to the ssh command and displays a caller-supplied explanation of why.
:arg b_command: A list containing the command to add the new arguments to.
This list will be modified by this method.
:arg b_args: An iterable of new arguments to add. This iterable is used
more than once so it must be persistent (i.e. a list is okay but a
StringIO would not be)
:arg explanation: A text string explaining why the arguments
were added. It will be displayed with a high enough verbosity.
.. note:: This function does its work via side-effect. The b_command list has the new arguments appended.
"""
display.vvvvv(u'SSH: %s: (%s)' % (explanation, ')('.join(to_text(a) for a in b_args)), host=self.host)
b_command += b_args
def _build_command(self, binary, subsystem, *other_args):
'''
Takes an executable (ssh, scp, sftp or wrapper) and optional extra arguments and returns the remote command
wrapped in local ssh shell commands and ready for execution.
:arg binary: actual executable to use to execute command.
:arg subsystem: type of executable provided, ssh/sftp/scp, needed because wrappers for ssh might have diff names.
:arg other_args: optional extra arguments appended verbatim to the final command
'''
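        # Sketch of the assembled command: an optional sshpass prefix, then the binary,
        # followed by configuration-driven options (-o ...), any common/extra args,
        # and finally the caller-supplied other_args.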
b_command = []
conn_password = self.get_option('password') or self._play_context.password
#
# First, the command to invoke
#
# If we want to use password authentication, we have to set up a pipe to
# write the password to sshpass.
pkcs11_provider = self.get_option("pkcs11_provider")
if conn_password or pkcs11_provider:
if not self._sshpass_available():
raise AnsibleError("to use the 'ssh' connection type with passwords or pkcs11_provider, you must install the sshpass program")
if not conn_password and pkcs11_provider:
raise AnsibleError("to use pkcs11_provider you must specify a password/pin")
self.sshpass_pipe = os.pipe()
b_command += [b'sshpass', b'-d' + to_bytes(self.sshpass_pipe[0], nonstring='simplerepr', errors='surrogate_or_strict')]
password_prompt = self.get_option('sshpass_prompt')
if not password_prompt and pkcs11_provider:
# Set default password prompt for pkcs11_provider to make it clear it's a PIN
password_prompt = 'Enter PIN for '
if password_prompt:
b_command += [b'-P', to_bytes(password_prompt, errors='surrogate_or_strict')]
b_command += [to_bytes(binary, errors='surrogate_or_strict')]
#
# Next, additional arguments based on the configuration.
#
# pkcs11 mode allows the use of Smartcards or Yubikey devices
if conn_password and pkcs11_provider:
self._add_args(b_command,
(b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=publickey",
b"-o", b"PasswordAuthentication=no",
b'-o', to_bytes(u'PKCS11Provider=%s' % pkcs11_provider)),
u'Enable pkcs11')
# sftp batch mode allows us to correctly catch failed transfers, but can
# be disabled if the client side doesn't support the option. However,
# sftp batch mode does not prompt for passwords so it must be disabled
# if not using controlpersist and using sshpass
if subsystem == 'sftp' and self.get_option('sftp_batch_mode'):
if conn_password:
b_args = [b'-o', b'BatchMode=no']
self._add_args(b_command, b_args, u'disable batch mode for sshpass')
b_command += [b'-b', b'-']
if display.verbosity > 3:
b_command.append(b'-vvv')
# Next, we add ssh_args
ssh_args = self.get_option('ssh_args')
if ssh_args:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in
self._split_ssh_args(ssh_args)]
self._add_args(b_command, b_args, u"ansible.cfg set ssh_args")
# Now we add various arguments that have their own specific settings defined in docs above.
if self.get_option('host_key_checking') is False:
b_args = (b"-o", b"StrictHostKeyChecking=no")
self._add_args(b_command, b_args, u"ANSIBLE_HOST_KEY_CHECKING/host_key_checking disabled")
self.port = self.get_option('port')
if self.port is not None:
b_args = (b"-o", b"Port=" + to_bytes(self.port, nonstring='simplerepr', errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"ANSIBLE_REMOTE_PORT/remote_port/ansible_port set")
key = self.get_option('private_key_file')
if key:
b_args = (b"-o", b'IdentityFile="' + to_bytes(os.path.expanduser(key), errors='surrogate_or_strict') + b'"')
self._add_args(b_command, b_args, u"ANSIBLE_PRIVATE_KEY_FILE/private_key_file/ansible_ssh_private_key_file set")
if not conn_password:
self._add_args(
b_command, (
b"-o", b"KbdInteractiveAuthentication=no",
b"-o", b"PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey",
b"-o", b"PasswordAuthentication=no"
),
u"ansible_password/ansible_ssh_password not set"
)
self.user = self.get_option('remote_user')
if self.user:
self._add_args(
b_command,
(b"-o", b'User="%s"' % to_bytes(self.user, errors='surrogate_or_strict')),
u"ANSIBLE_REMOTE_USER/remote_user/ansible_user/user/-u set"
)
timeout = self.get_option('timeout')
self._add_args(
b_command,
(b"-o", b"ConnectTimeout=" + to_bytes(timeout, errors='surrogate_or_strict', nonstring='simplerepr')),
u"ANSIBLE_TIMEOUT/timeout set"
)
# Add in any common or binary-specific arguments from the PlayContext
# (i.e. inventory or task settings or overrides on the command line).
for opt in (u'ssh_common_args', u'{0}_extra_args'.format(subsystem)):
attr = self.get_option(opt)
if attr is not None:
b_args = [to_bytes(a, errors='surrogate_or_strict') for a in self._split_ssh_args(attr)]
self._add_args(b_command, b_args, u"Set %s" % opt)
# Check if ControlPersist is enabled and add a ControlPath if one hasn't
# already been set.
controlpersist, controlpath = self._persistence_controls(b_command)
if controlpersist:
self._persistent = True
if not controlpath:
self.control_path_dir = self.get_option('control_path_dir')
cpdir = unfrackpath(self.control_path_dir)
b_cpdir = to_bytes(cpdir, errors='surrogate_or_strict')
# The directory must exist and be writable.
makedirs_safe(b_cpdir, 0o700)
if not os.access(b_cpdir, os.W_OK):
raise AnsibleError("Cannot write to ControlPath %s" % to_native(cpdir))
self.control_path = self.get_option('control_path')
if not self.control_path:
self.control_path = self._create_control_path(
self.host,
self.port,
self.user
)
b_args = (b"-o", b'ControlPath="%s"' % to_bytes(self.control_path % dict(directory=cpdir), errors='surrogate_or_strict'))
self._add_args(b_command, b_args, u"found only ControlPersist; added ControlPath")
# Finally, we add any caller-supplied extras.
if other_args:
b_command += [to_bytes(a) for a in other_args]
return b_command
def _send_initial_data(self, fh, in_data, ssh_process):
'''
Writes initial data to the stdin filehandle of the subprocess and closes
it. (The handle must be closed; otherwise, for example, "sftp -b -" will
just hang forever waiting for more commands.)
'''
display.debug(u'Sending initial data')
try:
fh.write(to_bytes(in_data))
fh.close()
except (OSError, IOError) as e:
# The ssh connection may have already terminated at this point, with a more useful error
# Only raise AnsibleConnectionFailure if the ssh process is still alive
time.sleep(0.001)
ssh_process.poll()
if getattr(ssh_process, 'returncode', None) is None:
raise AnsibleConnectionFailure(
'Data could not be sent to remote host "%s". Make sure this host can be reached '
'over ssh: %s' % (self.host, to_native(e)), orig_exc=e
)
display.debug(u'Sent initial data (%d bytes)' % len(in_data))
# Used by _run() to kill processes on failures
@staticmethod
def _terminate_process(p):
""" Terminate a process, ignoring errors """
try:
p.terminate()
except (OSError, IOError):
pass
# This is separate from _run() because we need to do the same thing for stdout
# and stderr.
def _examine_output(self, source, state, b_chunk, sudoable):
'''
Takes a string, extracts complete lines from it, tests to see if they
are a prompt, error message, etc., and sets appropriate flags in self.
Prompt and success lines are removed.
Returns the processed (i.e. possibly-edited) output and the unprocessed
remainder (to be processed with the next chunk) as strings.
'''
output = []
for b_line in b_chunk.splitlines(True):
display_line = to_text(b_line).rstrip('\r\n')
suppress_output = False
# display.debug("Examining line (source=%s, state=%s): '%s'" % (source, state, display_line))
if SSH_DEBUG.match(display_line):
# skip lines from ssh debug output to avoid false matches
pass
elif self.become.expect_prompt() and self.become.check_password_prompt(b_line):
display.debug(u"become_prompt: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_prompt'] = True
suppress_output = True
elif self.become.success and self.become.check_success(b_line):
display.debug(u"become_success: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_success'] = True
suppress_output = True
elif sudoable and self.become.check_incorrect_password(b_line):
display.debug(u"become_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_error'] = True
elif sudoable and self.become.check_missing_password(b_line):
display.debug(u"become_nopasswd_error: (source=%s, state=%s): '%s'" % (source, state, display_line))
self._flags['become_nopasswd_error'] = True
if not suppress_output:
output.append(b_line)
# The chunk we read was most likely a series of complete lines, but just
# in case the last line was incomplete (and not a prompt, which we would
# have removed from the output), we retain it to be processed with the
# next chunk.
remainder = b''
if output and not output[-1].endswith(b'\n'):
remainder = output[-1]
output = output[:-1]
return b''.join(output), remainder
def _bare_run(self, cmd, in_data, sudoable=True, checkrc=True):
'''
Starts the command and communicates with it until it ends.
'''
# We don't use _shell.quote as this is run on the controller and independent from the shell plugin chosen
display_cmd = u' '.join(shlex.quote(to_text(c)) for c in cmd)
display.vvv(u'SSH: EXEC {0}'.format(display_cmd), host=self.host)
# Start the given command. If we don't need to pipeline data, we can try
# to use a pseudo-tty (ssh will have been invoked with -tt). If we are
# pipelining data, or can't create a pty, we fall back to using plain
# old pipes.
p = None
if isinstance(cmd, (text_type, binary_type)):
cmd = to_bytes(cmd)
else:
cmd = list(map(to_bytes, cmd))
conn_password = self.get_option('password') or self._play_context.password
if not in_data:
try:
# Make sure stdin is a proper pty to avoid tcgetattr errors
master, slave = pty.openpty()
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdin = os.fdopen(master, 'wb', 0)
os.close(slave)
except (OSError, IOError):
p = None
if not p:
try:
if PY3 and conn_password:
# pylint: disable=unexpected-keyword-arg
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE, pass_fds=self.sshpass_pipe)
else:
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
stdin = p.stdin
except (OSError, IOError) as e:
raise AnsibleError('Unable to execute ssh command line on a controller due to: %s' % to_native(e))
# If we are using SSH password authentication, write the password into
# the pipe we opened in _build_command.
if conn_password:
os.close(self.sshpass_pipe[0])
try:
os.write(self.sshpass_pipe[1], to_bytes(conn_password) + b'\n')
except OSError as e:
# Ignore broken pipe errors if the sshpass process has exited.
if e.errno != errno.EPIPE or p.poll() is None:
raise
os.close(self.sshpass_pipe[1])
#
# SSH state machine
#
# Now we read and accumulate output from the running process until it
# exits. Depending on the circumstances, we may also need to write an
# escalation password and/or pipelined input to the process.
states = [
'awaiting_prompt', 'awaiting_escalation', 'ready_to_send', 'awaiting_exit'
]
# Are we requesting privilege escalation? Right now, we may be invoked
# to execute sftp/scp with sudoable=True, but we can request escalation
# only when using ssh. Otherwise we can send initial data straightaway.
state = states.index('ready_to_send')
if to_bytes(self.get_option('ssh_executable')) in cmd and sudoable:
prompt = getattr(self.become, 'prompt', None)
if prompt:
# We're requesting escalation with a password, so we have to
# wait for a password prompt.
state = states.index('awaiting_prompt')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(prompt)))
elif self.become and self.become.success:
# We're requesting escalation without a password, so we have to
# detect success/failure before sending any initial data.
state = states.index('awaiting_escalation')
display.debug(u'Initial state: %s: %s' % (states[state], to_text(self.become.success)))
# We store accumulated stdout and stderr output from the process here,
# but strip any privilege escalation prompt/confirmation lines first.
# Output is accumulated into tmp_*, complete lines are extracted into
# an array, then checked and removed or copied to stdout or stderr. We
# set any flags based on examining the output in self._flags.
b_stdout = b_stderr = b''
b_tmp_stdout = b_tmp_stderr = b''
self._flags = dict(
become_prompt=False, become_success=False,
become_error=False, become_nopasswd_error=False
)
# select timeout should be longer than the connect timeout, otherwise
# they will race each other when we can't connect, and the connect
# timeout usually fails
timeout = 2 + self.get_option('timeout')
for fd in (p.stdout, p.stderr):
fcntl.fcntl(fd, fcntl.F_SETFL, fcntl.fcntl(fd, fcntl.F_GETFL) | os.O_NONBLOCK)
# TODO: bcoca would like to use SelectSelector() when open
# select is faster when the number of filehandles is low and we only ever handle 1.
selector = selectors.DefaultSelector()
selector.register(p.stdout, selectors.EVENT_READ)
selector.register(p.stderr, selectors.EVENT_READ)
# If we can send initial data without waiting for anything, we do so
# before we start polling
if states[state] == 'ready_to_send' and in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
try:
while True:
poll = p.poll()
events = selector.select(timeout)
# We pay attention to timeouts only while negotiating a prompt.
if not events:
# We timed out
if state <= states.index('awaiting_escalation'):
# If the process has already exited, then it's not really a
# timeout; we'll let the normal error handling deal with it.
if poll is not None:
break
self._terminate_process(p)
raise AnsibleError('Timeout (%ds) waiting for privilege escalation prompt: %s' % (timeout, to_native(b_stdout)))
# Read whatever output is available on stdout and stderr, and stop
# listening to the pipe if it's been closed.
for key, event in events:
if key.fileobj == p.stdout:
b_chunk = p.stdout.read()
if b_chunk == b'':
# stdout has been closed, stop watching it
selector.unregister(p.stdout)
# When ssh has ControlMaster (+ControlPath/Persist) enabled, the
# first connection goes into the background and we never see EOF
# on stderr. If we see EOF on stdout, lower the select timeout
# to reduce the time wasted selecting on stderr if we observe
# that the process has not yet exited after this EOF. Otherwise
# we may spend a long timeout period waiting for an EOF that is
# not going to arrive until the persisted connection closes.
timeout = 1
b_tmp_stdout += b_chunk
display.debug(u"stdout chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
elif key.fileobj == p.stderr:
b_chunk = p.stderr.read()
if b_chunk == b'':
# stderr has been closed, stop watching it
selector.unregister(p.stderr)
b_tmp_stderr += b_chunk
display.debug("stderr chunk (state=%s):\n>>>%s<<<\n" % (state, to_text(b_chunk)))
# We examine the output line-by-line until we have negotiated any
# privilege escalation prompt and subsequent success/error message.
# Afterwards, we can accumulate output without looking at it.
if state < states.index('ready_to_send'):
if b_tmp_stdout:
b_output, b_unprocessed = self._examine_output('stdout', states[state], b_tmp_stdout, sudoable)
b_stdout += b_output
b_tmp_stdout = b_unprocessed
if b_tmp_stderr:
b_output, b_unprocessed = self._examine_output('stderr', states[state], b_tmp_stderr, sudoable)
b_stderr += b_output
b_tmp_stderr = b_unprocessed
else:
b_stdout += b_tmp_stdout
b_stderr += b_tmp_stderr
b_tmp_stdout = b_tmp_stderr = b''
# If we see a privilege escalation prompt, we send the password.
# (If we're expecting a prompt but the escalation succeeds, we
# didn't need the password and can carry on regardless.)
if states[state] == 'awaiting_prompt':
if self._flags['become_prompt']:
display.debug(u'Sending become_password in response to prompt')
become_pass = self.become.get_option('become_pass', playcontext=self._play_context)
stdin.write(to_bytes(become_pass, errors='surrogate_or_strict') + b'\n')
# On python3 stdin is a BufferedWriter, and we don't have a guarantee
# that the write will happen without a flush
stdin.flush()
self._flags['become_prompt'] = False
state += 1
elif self._flags['become_success']:
state += 1
# We've requested escalation (with or without a password), now we
# wait for an error message or a successful escalation.
if states[state] == 'awaiting_escalation':
if self._flags['become_success']:
display.vvv(u'Escalation succeeded')
self._flags['become_success'] = False
state += 1
elif self._flags['become_error']:
display.vvv(u'Escalation failed')
self._terminate_process(p)
self._flags['become_error'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
elif self._flags['become_nopasswd_error']:
display.vvv(u'Escalation requires password')
self._terminate_process(p)
self._flags['become_nopasswd_error'] = False
raise AnsibleError('Missing %s password' % self.become.name)
elif self._flags['become_prompt']:
# This shouldn't happen, because we should see the "Sorry,
# try again" message first.
display.vvv(u'Escalation prompt repeated')
self._terminate_process(p)
self._flags['become_prompt'] = False
raise AnsibleError('Incorrect %s password' % self.become.name)
# Once we're sure that the privilege escalation prompt, if any, has
# been dealt with, we can send any initial data and start waiting
# for output.
if states[state] == 'ready_to_send':
if in_data:
self._send_initial_data(stdin, in_data, p)
state += 1
# Now we're awaiting_exit: has the child process exited? If it has,
# and we've read all available output from it, we're done.
if poll is not None:
if not selector.get_map() or not events:
break
# We should not see further writes to the stdout/stderr file
# descriptors after the process has exited, so set the select
# timeout to gather any last writes we may have missed.
timeout = 0
continue
# If the process has not yet exited, but we've already read EOF from
# its stdout and stderr (and thus no longer watching any file
# descriptors), we can just wait for it to exit.
elif not selector.get_map():
p.wait()
break
# Otherwise there may still be outstanding data to read.
finally:
selector.close()
# close stdin, stdout, and stderr after process is terminated and
# stdout/stderr are read completely (see also issues #848, #64768).
stdin.close()
p.stdout.close()
p.stderr.close()
if self.get_option('host_key_checking'):
if cmd[0] == b"sshpass" and p.returncode == 6:
raise AnsibleError('Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support '
'this. Please add this host\'s fingerprint to your known_hosts file to manage this host.')
controlpersisterror = b'Bad configuration option: ControlPersist' in b_stderr or b'unknown configuration option: ControlPersist' in b_stderr
if p.returncode != 0 and controlpersisterror:
raise AnsibleError('using -c ssh on certain older ssh versions may not support ControlPersist, set ANSIBLE_SSH_ARGS="" '
'(or ssh_args in [ssh_connection] section of the config file) before running again')
# If we find a broken pipe because of ControlPersist timeout expiring (see #16731),
# we raise a special exception so that we can retry a connection.
controlpersist_broken_pipe = b'mux_client_hello_exchange: write packet: Broken pipe' in b_stderr
if p.returncode == 255:
additional = to_native(b_stderr)
if controlpersist_broken_pipe:
raise AnsibleControlPersistBrokenPipeError('Data could not be sent because of ControlPersist broken pipe: %s' % additional)
elif in_data and checkrc:
raise AnsibleConnectionFailure('Data could not be sent to remote host "%s". Make sure this host can be reached over ssh: %s'
% (self.host, additional))
return (p.returncode, b_stdout, b_stderr)
@_ssh_retry
def _run(self, cmd, in_data, sudoable=True, checkrc=True):
"""Wrapper around _bare_run that retries the connection
"""
return self._bare_run(cmd, in_data, sudoable=sudoable, checkrc=checkrc)
@_ssh_retry
def _file_transport_command(self, in_path, out_path, sftp_action):
# scp and sftp require square brackets for IPv6 addresses, but
# accept them for hostnames and IPv4 addresses too.
host = '[%s]' % self.host
smart_methods = ['sftp', 'scp', 'piped']
# Windows does not support dd so we cannot use the piped method
if getattr(self._shell, "_IS_WINDOWS", False):
smart_methods.remove('piped')
# Transfer methods to try
methods = []
# Use the transfer_method option if set, otherwise use scp_if_ssh
ssh_transfer_method = self.get_option('ssh_transfer_method')
scp_if_ssh = self.get_option('scp_if_ssh')
if ssh_transfer_method is None and scp_if_ssh == 'smart':
ssh_transfer_method = 'smart'
if ssh_transfer_method is not None:
if ssh_transfer_method == 'smart':
methods = smart_methods
else:
methods = [ssh_transfer_method]
else:
# since this can be a non-bool now, we need to handle it correctly
if not isinstance(scp_if_ssh, bool):
scp_if_ssh = scp_if_ssh.lower()
if scp_if_ssh in BOOLEANS:
scp_if_ssh = boolean(scp_if_ssh, strict=False)
elif scp_if_ssh != 'smart':
raise AnsibleOptionsError('scp_if_ssh needs to be one of [smart|True|False]')
if scp_if_ssh == 'smart':
methods = smart_methods
elif scp_if_ssh is True:
methods = ['scp']
else:
methods = ['sftp']
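# Illustrative outcomes of the selection above (assumed summary, not in the
# original source): ssh_transfer_method='smart' (or legacy scp_if_ssh='smart')
# tries ['sftp', 'scp', 'piped'] in order; scp_if_ssh=True yields ['scp'];
# scp_if_ssh=False yields ['sftp'].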
for method in methods:
returncode = stdout = stderr = None
if method == 'sftp':
cmd = self._build_command(self.get_option('sftp_executable'), 'sftp', to_bytes(host))
in_data = u"{0} {1} {2}\n".format(sftp_action, shlex.quote(in_path), shlex.quote(out_path))
in_data = to_bytes(in_data, nonstring='passthru')
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'scp':
scp = self.get_option('scp_executable')
if sftp_action == 'get':
cmd = self._build_command(scp, 'scp', u'{0}:{1}'.format(host, self._shell.quote(in_path)), out_path)
else:
cmd = self._build_command(scp, 'scp', in_path, u'{0}:{1}'.format(host, self._shell.quote(out_path)))
in_data = None
(returncode, stdout, stderr) = self._bare_run(cmd, in_data, checkrc=False)
elif method == 'piped':
if sftp_action == 'get':
# we pass sudoable=False to disable pty allocation, which
# would end up mixing stdout/stderr and screwing with newlines
(returncode, stdout, stderr) = self.exec_command('dd if=%s bs=%s' % (in_path, BUFSIZE), sudoable=False)
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
out_file.write(stdout)
else:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as f:
in_data = to_bytes(f.read(), nonstring='passthru')
if not in_data:
count = ' count=0'
else:
count = ''
(returncode, stdout, stderr) = self.exec_command('dd of=%s bs=%s%s' % (out_path, BUFSIZE, count), in_data=in_data, sudoable=False)
# Check the return code and rollover to next method if failed
if returncode == 0:
return (returncode, stdout, stderr)
else:
# If not in smart mode, the data will be printed by the raise below
if len(methods) > 1:
display.warning(u'%s transfer mechanism failed on %s. Use ANSIBLE_DEBUG=1 to see detailed information' % (method, host))
display.debug(u'%s' % to_text(stdout))
display.debug(u'%s' % to_text(stderr))
if returncode == 255:
raise AnsibleConnectionFailure("Failed to connect to the host via %s: %s" % (method, to_native(stderr)))
else:
raise AnsibleError("failed to transfer file to %s %s:\n%s\n%s" %
(to_native(in_path), to_native(out_path), to_native(stdout), to_native(stderr)))
def _escape_win_path(self, path):
""" converts a Windows path to one that's supported by SFTP and SCP """
# If using a root path then we need to start with /
prefix = ""
if re.match(r'^\w{1}:', path):
prefix = "/"
# Convert all '\' to '/'
return "%s%s" % (prefix, path.replace("\\", "/"))
#
# Main public methods
#
def exec_command(self, cmd, in_data=None, sudoable=True):
''' run a command on the remote host '''
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"ESTABLISH SSH CONNECTION FOR USER: {0}".format(self.user), host=self.host)
if getattr(self._shell, "_IS_WINDOWS", False):
# Become method 'runas' is done in the wrapper that is executed,
# so disable sudoable to keep bare_run from waiting for a
# prompt that will never occur
sudoable = False
# Make sure our first command is to set the console encoding to
# utf-8, this must be done via chcp to get utf-8 (65001)
cmd_parts = ["chcp.com", "65001", self._shell._SHELL_REDIRECT_ALLNULL, self._shell._SHELL_AND]
cmd_parts.extend(self._shell._encode_script(cmd, as_list=True, strict_mode=False, preserve_rc=False))
cmd = ' '.join(cmd_parts)
# we can only use tty when we are not pipelining the modules. piping
# data into /usr/bin/python inside a tty automatically invokes the
# python interactive-mode but the modules are not compatible with the
# interactive-mode ("unexpected indent" mainly because of empty lines)
ssh_executable = self.get_option('ssh_executable')
# -tt can cause various issues in some environments so allow the user
# to disable it as a troubleshooting method.
use_tty = self.get_option('use_tty')
if not in_data and sudoable and use_tty:
args = ('-tt', self.host, cmd)
else:
args = (self.host, cmd)
cmd = self._build_command(ssh_executable, 'ssh', *args)
(returncode, stdout, stderr) = self._run(cmd, in_data, sudoable=sudoable)
# When running on Windows, stderr may contain CLIXML encoded output
if getattr(self._shell, "_IS_WINDOWS", False) and stderr.startswith(b"#< CLIXML"):
stderr = _parse_clixml(stderr)
return (returncode, stdout, stderr)
def put_file(self, in_path, out_path):
''' transfer a file from local to remote '''
super(Connection, self).put_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"PUT {0} TO {1}".format(in_path, out_path), host=self.host)
if not os.path.exists(to_bytes(in_path, errors='surrogate_or_strict')):
raise AnsibleFileNotFound("file or module does not exist: {0}".format(to_native(in_path)))
if getattr(self._shell, "_IS_WINDOWS", False):
out_path = self._escape_win_path(out_path)
return self._file_transport_command(in_path, out_path, 'put')
def fetch_file(self, in_path, out_path):
''' fetch a file from remote to local '''
super(Connection, self).fetch_file(in_path, out_path)
self.host = self.get_option('host') or self._play_context.remote_addr
display.vvv(u"FETCH {0} TO {1}".format(in_path, out_path), host=self.host)
# need to add / if path is rooted
if getattr(self._shell, "_IS_WINDOWS", False):
in_path = self._escape_win_path(in_path)
return self._file_transport_command(in_path, out_path, 'get')
def reset(self):
run_reset = False
self.host = self.get_option('host') or self._play_context.remote_addr
# If we have a persistent ssh connection (ControlPersist), we can ask it to stop listening.
# only run the reset if the ControlPath already exists or if it isn't configured and ControlPersist is set
# 'check' will determine this.
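# Roughly equivalent to running these OpenSSH client commands by hand
# (illustrative, with <host> standing in for the target):
#   ssh -O check <host>   # is a ControlMaster listening on the ControlPath?
#   ssh -O stop <host>    # ask the master to stop accepting new sessions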
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'check', self.host)
display.vvv(u'sending connection check: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.vvv(u"No connection to reset: %s" % to_text(stderr))
else:
run_reset = True
if run_reset:
cmd = self._build_command(self.get_option('ssh_executable'), 'ssh', '-O', 'stop', self.host)
display.vvv(u'sending connection stop: %s' % to_text(cmd))
p = subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
stdout, stderr = p.communicate()
status_code = p.wait()
if status_code != 0:
display.warning(u"Failed to reset connection:%s" % to_text(stderr))
self.close()
def close(self):
self._connected = False
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,564 |
check_mode for unarchive module is not supported for gzipped tar files
|
### Summary
The documentation on https://docs.ansible.com/ansible/latest/collections/ansible/builtin/unarchive_module.html states that the module fully supports check-mode.
But in fact, check mode is not supported when it concerns a gzipped tar file; see https://github.com/ansible/ansible-modules-core/blob/00911a75ad6635834b6d28eef41f197b2f73c381/files/unarchive.py#L591
I was misled by the '_fully_ supported' wording and noticed that the step was skipped in check mode.
Wouldn't it make sense to adapt the documentation accordingly?
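For context, the skip comes from a guard along these lines in the gtar handler (a paraphrased sketch of the linked code, not an exact quote):

```python
# Paraphrased from the TgzArchive handler referenced above: when gtar is the
# handler, the module bails out of check mode instead of predicting changes.
if module.check_mode:
    module.exit_json(skipped=True,
                     msg="remote module (%s) does not support check mode when using gtar" % module._name)
```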
### Issue Type
Documentation Report
### Component Name
https://github.com/ansible/ansible-modules-core/blob/00911a75ad6635834b6d28eef41f197b2f73c381/files/unarchive.py
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = ansible.cfg
configured module search path = ['/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /home/<user>/.ansible
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 2.11.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Debian 11
### Additional Information
Once the documentation is adapted, it will be clear to everyone that unarchiving gzipped tar files is not check_mode compatible.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78564
|
https://github.com/ansible/ansible/pull/78741
|
952ee88f33de4d49ff5f7bd4bec3431a4b0fdc78
|
f50ff1c2dbb2eee88b2ac9e50e9f13d942e41f12
| 2022-08-16T20:34:10Z |
python
| 2022-09-15T15:06:31Z |
lib/ansible/modules/unarchive.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Michael DeHaan <[email protected]>
# Copyright: (c) 2013, Dylan Martin <[email protected]>
# Copyright: (c) 2015, Toshio Kuratomi <[email protected]>
# Copyright: (c) 2016, Dag Wieers <[email protected]>
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = r'''
---
module: unarchive
version_added: '1.4'
short_description: Unpacks an archive after (optionally) copying it from the local machine
description:
- The C(unarchive) module unpacks an archive. It will not unpack a compressed file that does not contain an archive.
- By default, it will copy the source file from the local system to the target before unpacking.
- Set C(remote_src=yes) to unpack an archive which already exists on the target.
- If checksum validation is desired, use M(ansible.builtin.get_url) or M(ansible.builtin.uri) instead to fetch the file and set C(remote_src=yes).
- For Windows targets, use the M(community.windows.win_unzip) module instead.
options:
src:
description:
- If C(remote_src=no) (default), local path to archive file to copy to the target server; can be absolute or relative. If C(remote_src=yes), path on the
target server to existing archive file to unpack.
- If C(remote_src=yes) and C(src) contains C(://), the remote machine will download the file from the URL first. (version_added 2.0). This is only for
simple cases, for full download support use the M(ansible.builtin.get_url) module.
type: path
required: true
dest:
description:
- Remote absolute path where the archive should be unpacked.
- The given path must exist. Base directory is not created by this module.
type: path
required: true
copy:
description:
- If true, the file is copied from the local controller to the managed (remote) node; otherwise, the plugin will look for the src archive on the managed machine.
- This option has been deprecated in favor of C(remote_src).
- This option is mutually exclusive with C(remote_src).
type: bool
default: yes
creates:
description:
- If the specified absolute path (file or directory) already exists, this step will B(not) be run.
- The specified absolute path (file or directory) must be below the base path given with C(dest:).
type: path
version_added: "1.6"
io_buffer_size:
description:
- Size of the volatile memory buffer that is used for extracting files from the archive in bytes.
type: int
default: 65536
version_added: "2.12"
list_files:
description:
- If set to True, return the list of files that are contained in the tarball.
type: bool
default: no
version_added: "2.0"
exclude:
description:
- List the directory and file entries that you would like to exclude from the unarchive action.
- Mutually exclusive with C(include).
type: list
default: []
elements: str
version_added: "2.1"
include:
description:
- List of directory and file entries that you would like to extract from the archive. If C(include)
is not empty, only files listed here will be extracted.
- Mutually exclusive with C(exclude).
type: list
default: []
elements: str
version_added: "2.11"
keep_newer:
description:
- Do not replace existing files that are newer than files from the archive.
type: bool
default: no
version_added: "2.1"
extra_opts:
description:
- Specify additional options by passing in an array.
- Each space-separated command-line option should be a new element of the array. See examples.
- Command-line options with multiple elements must use multiple lines in the array, one for each element.
type: list
elements: str
default: ""
version_added: "2.1"
remote_src:
description:
- Set to C(yes) to indicate the archived file is already on the remote system and not local to the Ansible controller.
- This option is mutually exclusive with C(copy).
type: bool
default: no
version_added: "2.2"
validate_certs:
description:
- This only applies if using a https URL as the source of the file.
- This should only be set to C(no) on personally controlled sites using a self-signed certificate.
- Prior to 2.2 the code worked as if this was set to C(yes).
type: bool
default: yes
version_added: "2.2"
extends_documentation_fragment:
- action_common_attributes
- action_common_attributes.flow
- action_common_attributes.files
- decrypt
- files
attributes:
action:
support: full
async:
support: none
bypass_host_loop:
support: none
check_mode:
support: partial
details: Not supported for gzipped tar files.
diff_mode:
support: partial
details: Uses gtar's C(--diff) arg to calculate if changed or not. If this C(arg) is not supported, it will always unpack the archive.
platform:
platforms: posix
safe_file_operations:
support: none
vault:
support: full
todo:
- Re-implement tar support using native tarfile module.
- Re-implement zip support using native zipfile module.
notes:
- Requires C(zipinfo) and C(gtar)/C(unzip) command on target host.
- Requires C(zstd) command on target host to expand I(.tar.zst) files.
- Can handle I(.zip) files using C(unzip) as well as I(.tar), I(.tar.gz), I(.tar.bz2), I(.tar.xz), and I(.tar.zst) files using C(gtar).
- Does not handle I(.gz) files, I(.bz2) files, I(.xz), or I(.zst) files that do not contain a I(.tar) archive.
- Existing files/directories in the destination which are not in the archive
are not touched. This is the same behavior as a normal archive extraction.
- Existing files/directories in the destination which are not in the archive
are ignored for purposes of deciding if the archive should be unpacked or not.
seealso:
- module: community.general.archive
- module: community.general.iso_extract
- module: community.windows.win_unzip
author: Michael DeHaan
'''
EXAMPLES = r'''
- name: Extract foo.tgz into /var/lib/foo
ansible.builtin.unarchive:
src: foo.tgz
dest: /var/lib/foo
- name: Unarchive a file that is already on the remote machine
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file that needs to be downloaded (added in 2.0)
ansible.builtin.unarchive:
src: https://example.com/example.zip
dest: /usr/local/bin
remote_src: yes
- name: Unarchive a file with extra options
ansible.builtin.unarchive:
src: /tmp/foo.zip
dest: /usr/local/bin
extra_opts:
- --transform
- s/^xxx/yyy/
'''
RETURN = r'''
dest:
description: Path to the destination directory.
returned: always
type: str
sample: /opt/software
files:
description: List of all the files in the archive.
returned: When I(list_files) is True
type: list
sample: '["file1", "file2"]'
gid:
description: Numerical ID of the group that owns the destination directory.
returned: always
type: int
sample: 1000
group:
description: Name of the group that owns the destination directory.
returned: always
type: str
sample: "librarians"
handler:
description: Archive software handler used to extract and decompress the archive.
returned: always
type: str
sample: "TgzArchive"
mode:
description: String that represents the octal permissions of the destination directory.
returned: always
type: str
sample: "0755"
owner:
description: Name of the user that owns the destination directory.
returned: always
type: str
sample: "paul"
size:
description: The size of destination directory in bytes. Does not include the size of files or subdirectories contained within.
returned: always
type: int
sample: 36
src:
description:
- The source archive's path.
- If I(src) was a remote web URL, or from the local ansible controller, this shows the temporary location where the download was stored.
returned: always
type: str
sample: "/home/paul/test.tar.gz"
state:
description: State of the destination. Effectively always "directory".
returned: always
type: str
sample: "directory"
uid:
description: Numerical ID of the user that owns the destination directory.
returned: always
type: int
sample: 1000
'''
import binascii
import codecs
import datetime
import fnmatch
import grp
import os
import platform
import pwd
import re
import stat
import time
import traceback
from functools import partial
from zipfile import ZipFile, BadZipfile
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.urls import fetch_file
try: # python 3.3+
from shlex import quote # type: ignore[attr-defined]
except ImportError: # older python
from pipes import quote
# String from tar that shows the tar contents are different from the
# filesystem
OWNER_DIFF_RE = re.compile(r': Uid differs$')
GROUP_DIFF_RE = re.compile(r': Gid differs$')
MODE_DIFF_RE = re.compile(r': Mode differs$')
MOD_TIME_DIFF_RE = re.compile(r': Mod time differs$')
# NEWER_DIFF_RE = re.compile(r' is newer or same age.$')
EMPTY_FILE_RE = re.compile(r': : Warning: Cannot stat: No such file or directory$')
MISSING_FILE_RE = re.compile(r': Warning: Cannot stat: No such file or directory$')
ZIP_FILE_MODE_RE = re.compile(r'([r-][w-][SsTtx-]){3}')
INVALID_OWNER_RE = re.compile(r': Invalid owner')
INVALID_GROUP_RE = re.compile(r': Invalid group')
def crc32(path, buffer_size):
''' Return a CRC32 checksum of a file '''
crc = binascii.crc32(b'')
with open(path, 'rb') as f:
for b_block in iter(partial(f.read, buffer_size), b''):
crc = binascii.crc32(b_block, crc)
return crc & 0xffffffff
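# Usage sketch (illustrative only): the result is an unsigned 32-bit integer,
# so crc32('/path/to/member', 65536) can be compared directly against the CRC
# value stored in the zip archive's central directory.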
def shell_escape(string):
''' Quote meta-characters in the args for the unix shell '''
return re.sub(r'([^A-Za-z0-9_])', r'\\\1', string)
class UnarchiveError(Exception):
pass
class ZipArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
self.io_buffer_size = module.params["io_buffer_size"]
self.excludes = module.params['exclude']
self.includes = []
self.include_files = self.module.params['include']
self.cmd_path = None
self.zipinfo_cmd_path = None
self._files_in_archive = []
self._infodict = dict()
self.zipinfoflag = ''
self.binaries = (
('unzip', 'cmd_path'),
('zipinfo', 'zipinfo_cmd_path'),
)
def _permstr_to_octal(self, modestr, umask):
''' Convert a Unix permission string (rw-r--r--) into a mode (0644) '''
revstr = modestr[::-1]
mode = 0
for j in range(0, 3):
for i in range(0, 3):
if revstr[i + 3 * j] in ['r', 'w', 'x', 's', 't']:
mode += 2 ** (i + 3 * j)
# The unzip utility does not support setting the stST bits
# if revstr[i + 3 * j] in ['s', 't', 'S', 'T' ]:
# mode += 2 ** (9 + j)
return (mode & ~umask)
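# Worked examples (assumed, for illustration):
#   _permstr_to_octal('rw-r--r--', 0)      -> 0o644
#   _permstr_to_octal('rwxrwxrwx', 0o022)  -> 0o755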
def _legacy_file_list(self):
rc, out, err = self.module.run_command([self.cmd_path, '-v', self.src])
if rc:
raise UnarchiveError('Neither python zipfile nor unzip can read %s' % self.src)
for line in out.splitlines()[3:-2]:
fields = line.split(None, 7)
self._files_in_archive.append(fields[7])
self._infodict[fields[7]] = int(fields[6])
def _crc32(self, path):
if self._infodict:
return self._infodict[path]
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for item in archive.infolist():
self._infodict[item.filename] = int(item.CRC)
except Exception:
archive.close()
raise UnarchiveError('Unable to list files in the archive')
return self._infodict[path]
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
self._files_in_archive = []
try:
archive = ZipFile(self.src)
except BadZipfile as e:
if e.args[0].lower().startswith('bad magic number'):
# Python2.4 can't handle zipfiles with > 64K files. Try using
# /usr/bin/unzip instead
self._legacy_file_list()
else:
raise
else:
try:
for member in archive.namelist():
if self.include_files:
for include in self.include_files:
if fnmatch.fnmatch(member, include):
self._files_in_archive.append(to_native(member))
else:
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(member, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(member))
except Exception as e:
archive.close()
raise UnarchiveError('Unable to list files in the archive: %s' % to_native(e))
archive.close()
return self._files_in_archive
def is_unarchived(self):
# BSD unzip doesn't support zipinfo listings with timestamp.
if self.zipinfoflag:
cmd = [self.zipinfo_cmd_path, self.zipinfoflag, '-T', '-s', self.src]
else:
cmd = [self.zipinfo_cmd_path, '-T', '-s', self.src]
if self.excludes:
cmd.extend(['-x', ] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
rc, out, err = self.module.run_command(cmd)
old_out = out
diff = ''
out = ''
if rc == 0:
unarchived = True
else:
unarchived = False
# Get some information related to user/group ownership
umask = os.umask(0)
os.umask(umask)
systemtype = platform.system()
# Get current user and group information
groups = os.getgroups()
run_uid = os.getuid()
run_gid = os.getgid()
try:
run_owner = pwd.getpwuid(run_uid).pw_name
except (TypeError, KeyError):
run_owner = run_uid
try:
run_group = grp.getgrgid(run_gid).gr_name
except (KeyError, ValueError, OverflowError):
run_group = run_gid
# Get future user ownership
fut_owner = fut_uid = None
if self.file_args['owner']:
try:
tpw = pwd.getpwnam(self.file_args['owner'])
except KeyError:
try:
tpw = pwd.getpwuid(int(self.file_args['owner']))
except (TypeError, KeyError, ValueError):
tpw = pwd.getpwuid(run_uid)
fut_owner = tpw.pw_name
fut_uid = tpw.pw_uid
else:
try:
fut_owner = run_owner
except Exception:
pass
fut_uid = run_uid
# Get future group ownership
fut_group = fut_gid = None
if self.file_args['group']:
try:
tgr = grp.getgrnam(self.file_args['group'])
except (ValueError, KeyError):
try:
# no need to check isdigit() explicitly here, if we fail to
# parse, the ValueError will be caught.
tgr = grp.getgrgid(int(self.file_args['group']))
except (KeyError, ValueError, OverflowError):
tgr = grp.getgrgid(run_gid)
fut_group = tgr.gr_name
fut_gid = tgr.gr_gid
else:
try:
fut_group = run_group
except Exception:
pass
fut_gid = run_gid
for line in old_out.splitlines():
change = False
pcs = line.split(None, 7)
if len(pcs) != 8:
# Too few fields... probably a piece of the header or footer
continue
# Check first and seventh field in order to skip header/footer
if len(pcs[0]) != 7 and len(pcs[0]) != 10:
continue
if len(pcs[6]) != 15:
continue
# Possible entries:
# -rw-rws--- 1.9 unx 2802 t- defX 11-Aug-91 13:48 perms.2660
# -rw-a-- 1.0 hpf 5358 Tl i4:3 4-Dec-91 11:33 longfilename.hpfs
# -r--ahs 1.1 fat 4096 b- i4:2 14-Jul-91 12:58 EA DATA. SF
# --w------- 1.0 mac 17357 bx i8:2 4-May-92 04:02 unzip.macr
if pcs[0][0] not in 'dl-?' or not frozenset(pcs[0][1:]).issubset('rwxstah-'):
continue
ztype = pcs[0][0]
permstr = pcs[0][1:]
version = pcs[1]
ostype = pcs[2]
size = int(pcs[3])
path = to_text(pcs[7], errors='surrogate_or_strict')
# Skip excluded files
if path in self.excludes:
out += 'Path %s is excluded on request\n' % path
continue
# Itemized change requires L for symlink
if path[-1] == '/':
if ztype != 'd':
err += 'Path %s incorrectly tagged as "%s", but is a directory.\n' % (path, ztype)
ftype = 'd'
elif ztype == 'l':
ftype = 'L'
elif ztype == '-':
ftype = 'f'
elif ztype == '?':
ftype = 'f'
# Some files may be storing FAT permissions, not Unix permissions
# For FAT permissions, we will use a base permissions set of 777 if the item is a directory or has the execute bit set. Otherwise, 666.
# This permission will then be modified by the system UMask.
# BSD always applies the Umask, even to Unix permissions.
# For Unix style permissions on Linux or Mac, we want to use them directly.
# So we set the UMask for this file to zero. That permission set will then be unchanged when calling _permstr_to_octal
if len(permstr) == 6:
if path[-1] == '/':
permstr = 'rwxrwxrwx'
elif permstr == 'rwx---':
permstr = 'rwxrwxrwx'
else:
permstr = 'rw-rw-rw-'
file_umask = umask
elif 'bsd' in systemtype.lower():
file_umask = umask
else:
file_umask = 0
# Test string conformity
if len(permstr) != 9 or not ZIP_FILE_MODE_RE.match(permstr):
raise UnarchiveError('ZIP info perm format incorrect, %s' % permstr)
# DEBUG
# err += "%s%s %10d %s\n" % (ztype, permstr, size, path)
b_dest = os.path.join(self.b_dest, to_bytes(path, errors='surrogate_or_strict'))
try:
st = os.lstat(b_dest)
except Exception:
change = True
self.includes.append(path)
err += 'Path %s is missing\n' % path
diff += '>%s++++++.?? %s\n' % (ftype, path)
continue
# Compare file types
if ftype == 'd' and not stat.S_ISDIR(st.st_mode):
change = True
self.includes.append(path)
err += 'File %s already exists, but not as a directory\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'f' and not stat.S_ISREG(st.st_mode):
change = True
unarchived = False
self.includes.append(path)
err += 'Directory %s already exists, but not as a regular file\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
if ftype == 'L' and not stat.S_ISLNK(st.st_mode):
change = True
self.includes.append(path)
err += 'Directory %s already exists, but not as a symlink\n' % path
diff += 'c%s++++++.?? %s\n' % (ftype, path)
continue
itemized = list('.%s.......??' % ftype)
# Note: this timestamp calculation has a rounding error
# somewhere... unzip and this timestamp can be one second off
# When that happens, we report a change and re-unzip the file
dt_object = datetime.datetime(*(time.strptime(pcs[6], '%Y%m%d.%H%M%S')[0:6]))
timestamp = time.mktime(dt_object.timetuple())
# Compare file timestamps
if stat.S_ISREG(st.st_mode):
if self.module.params['keep_newer']:
if timestamp > st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s is older, replacing file\n' % path
itemized[4] = 't'
elif stat.S_ISREG(st.st_mode) and timestamp < st.st_mtime:
# Add to excluded files, ignore other changes
out += 'File %s is newer, excluding file\n' % path
self.excludes.append(path)
continue
else:
if timestamp != st.st_mtime:
change = True
self.includes.append(path)
err += 'File %s differs in mtime (%f vs %f)\n' % (path, timestamp, st.st_mtime)
itemized[4] = 't'
# Compare file sizes
if stat.S_ISREG(st.st_mode) and size != st.st_size:
change = True
err += 'File %s differs in size (%d vs %d)\n' % (path, size, st.st_size)
itemized[3] = 's'
# Compare file checksums
if stat.S_ISREG(st.st_mode):
crc = crc32(b_dest, self.io_buffer_size)
if crc != self._crc32(path):
change = True
err += 'File %s differs in CRC32 checksum (0x%08x vs 0x%08x)\n' % (path, self._crc32(path), crc)
itemized[2] = 'c'
# Compare file permissions
# Do not handle permissions of symlinks
if ftype != 'L':
# Use the new mode provided with the action, if there is one
if self.file_args['mode']:
if isinstance(self.file_args['mode'], int):
mode = self.file_args['mode']
else:
try:
mode = int(self.file_args['mode'], 8)
except Exception as e:
try:
mode = AnsibleModule._symbolic_mode_to_octal(st, self.file_args['mode'])
except ValueError as e:
self.module.fail_json(path=path, msg="%s" % to_native(e), exception=traceback.format_exc())
# Only special files require no umask-handling
elif ztype == '?':
mode = self._permstr_to_octal(permstr, 0)
else:
mode = self._permstr_to_octal(permstr, file_umask)
if mode != stat.S_IMODE(st.st_mode):
change = True
itemized[5] = 'p'
err += 'Path %s differs in permissions (%o vs %o)\n' % (path, mode, stat.S_IMODE(st.st_mode))
# Compare file user ownership
owner = uid = None
try:
owner = pwd.getpwuid(st.st_uid).pw_name
except (TypeError, KeyError):
uid = st.st_uid
# If we are not root and requested owner is not our user, fail
if run_uid != 0 and (fut_owner != run_owner or fut_uid != run_uid):
raise UnarchiveError('Cannot change ownership of %s to %s, as user %s' % (path, fut_owner, run_owner))
if owner and owner != fut_owner:
change = True
err += 'Path %s is owned by user %s, not by user %s as expected\n' % (path, owner, fut_owner)
itemized[6] = 'o'
elif uid and uid != fut_uid:
change = True
err += 'Path %s is owned by uid %s, not by uid %s as expected\n' % (path, uid, fut_uid)
itemized[6] = 'o'
# Compare file group ownership
group = gid = None
try:
group = grp.getgrgid(st.st_gid).gr_name
except (KeyError, ValueError, OverflowError):
gid = st.st_gid
if run_uid != 0 and (fut_group != run_group or fut_gid != run_gid) and fut_gid not in groups:
raise UnarchiveError('Cannot change group ownership of %s to %s, as user %s' % (path, fut_group, run_owner))
if group and group != fut_group:
change = True
err += 'Path %s is owned by group %s, not by group %s as expected\n' % (path, group, fut_group)
itemized[6] = 'g'
elif gid and gid != fut_gid:
change = True
err += 'Path %s is owned by gid %s, not by gid %s as expected\n' % (path, gid, fut_gid)
itemized[6] = 'g'
# Register changed files and finalize diff output
if change:
if path not in self.includes:
self.includes.append(path)
diff += '%s %s\n' % (''.join(itemized), path)
if self.includes:
unarchived = False
# DEBUG
# out = old_out + out
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd, diff=diff)
def unarchive(self):
cmd = [self.cmd_path, '-o']
if self.opts:
cmd.extend(self.opts)
cmd.append(self.src)
# NOTE: Including (changed) files as arguments is problematic (limits on command line/arguments)
# if self.includes:
# NOTE: Command unzip has this strange behaviour where it expects quoted filenames to also be escaped
# cmd.extend(map(shell_escape, self.includes))
if self.excludes:
cmd.extend(['-x'] + self.excludes)
if self.include_files:
cmd.extend(self.include_files)
cmd.extend(['-d', self.b_dest])
rc, out, err = self.module.run_command(cmd)
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
missing = []
for b in self.binaries:
try:
setattr(self, b[1], get_bin_path(b[0]))
except ValueError:
missing.append(b[0])
if missing:
return False, "Unable to find required '{missing}' binary in the path.".format(missing="' or '".join(missing))
cmd = [self.cmd_path, '-l', self.src]
rc, out, err = self.module.run_command(cmd)
if rc == 0:
return True, None
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, err)
class TgzArchive(object):
def __init__(self, src, b_dest, file_args, module):
self.src = src
self.b_dest = b_dest
self.file_args = file_args
self.opts = module.params['extra_opts']
self.module = module
if self.module.check_mode:
self.module.exit_json(skipped=True, msg="remote module (%s) does not support check mode when using gtar" % self.module._name)
self.excludes = [path.rstrip('/') for path in self.module.params['exclude']]
self.include_files = self.module.params['include']
self.cmd_path = None
self.tar_type = None
self.zipflag = '-z'
self._files_in_archive = []
def _get_tar_type(self):
cmd = [self.cmd_path, '--version']
(rc, out, err) = self.module.run_command(cmd)
tar_type = None
if out.startswith('bsdtar'):
tar_type = 'bsd'
elif out.startswith('tar') and 'GNU' in out:
tar_type = 'gnu'
return tar_type
@property
def files_in_archive(self):
if self._files_in_archive:
return self._files_in_archive
cmd = [self.cmd_path, '--list', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
if rc != 0:
raise UnarchiveError('Unable to list files in the archive: %s' % err)
for filename in out.splitlines():
# Compensate for locale-related problems in gtar output (octal unicode representation) #11348
# filename = filename.decode('string_escape')
filename = to_native(codecs.escape_decode(filename)[0])
# We don't allow absolute filenames. If the user wants to unarchive rooted in "/"
# they need to use "dest: '/'". This follows the defaults for gtar, pax, etc.
# Allowing absolute filenames here also causes bugs: https://github.com/ansible/ansible/issues/21397
if filename.startswith('/'):
filename = filename[1:]
exclude_flag = False
if self.excludes:
for exclude in self.excludes:
if fnmatch.fnmatch(filename, exclude):
exclude_flag = True
break
if not exclude_flag:
self._files_in_archive.append(to_native(filename))
return self._files_in_archive
def is_unarchived(self):
cmd = [self.cmd_path, '--diff', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
# Check whether the differences are in something that we're
# setting anyway
# What is different
unarchived = True
old_out = out
out = ''
run_uid = os.getuid()
# When unarchiving as a user, or when owner/group/mode is supplied, --diff is insufficient
# Only way to be sure is to check request with what is on disk (as we do for zip)
# Leave this up to set_fs_attributes_if_different() instead of inducing a (false) change
for line in old_out.splitlines() + err.splitlines():
# FIXME: Remove the bogus lines from error-output as well !
# Ignore bogus errors on empty filenames (when using --split-component)
if EMPTY_FILE_RE.search(line):
continue
if run_uid == 0 and not self.file_args['owner'] and OWNER_DIFF_RE.search(line):
out += line + '\n'
if run_uid == 0 and not self.file_args['group'] and GROUP_DIFF_RE.search(line):
out += line + '\n'
if not self.file_args['mode'] and MODE_DIFF_RE.search(line):
out += line + '\n'
if MOD_TIME_DIFF_RE.search(line):
out += line + '\n'
if MISSING_FILE_RE.search(line):
out += line + '\n'
if INVALID_OWNER_RE.search(line):
out += line + '\n'
if INVALID_GROUP_RE.search(line):
out += line + '\n'
if out:
unarchived = False
return dict(unarchived=unarchived, rc=rc, out=out, err=err, cmd=cmd)
def unarchive(self):
cmd = [self.cmd_path, '--extract', '-C', self.b_dest]
if self.zipflag:
cmd.append(self.zipflag)
if self.opts:
cmd.extend(['--show-transformed-names'] + self.opts)
if self.file_args['owner']:
cmd.append('--owner=' + quote(self.file_args['owner']))
if self.file_args['group']:
cmd.append('--group=' + quote(self.file_args['group']))
if self.module.params['keep_newer']:
cmd.append('--keep-newer-files')
if self.excludes:
cmd.extend(['--exclude=' + f for f in self.excludes])
cmd.extend(['-f', self.src])
if self.include_files:
cmd.extend(self.include_files)
locale = get_best_parsable_locale(self.module)
rc, out, err = self.module.run_command(cmd, cwd=self.b_dest, environ_update=dict(LANG=locale, LC_ALL=locale, LC_MESSAGES=locale, LANGUAGE=locale))
return dict(cmd=cmd, rc=rc, out=out, err=err)
def can_handle_archive(self):
# Prefer gtar (GNU tar) as it supports the compression options -z, -j and -J
try:
self.cmd_path = get_bin_path('gtar')
except ValueError:
# Fallback to tar
try:
self.cmd_path = get_bin_path('tar')
except ValueError:
return False, "Unable to find required 'gtar' or 'tar' binary in the path"
self.tar_type = self._get_tar_type()
if self.tar_type != 'gnu':
return False, 'Command "%s" detected as tar type %s. GNU tar required.' % (self.cmd_path, self.tar_type)
try:
if self.files_in_archive:
return True, None
except UnarchiveError as e:
return False, 'Command "%s" could not handle archive: %s' % (self.cmd_path, to_native(e))
# If there were errors, or no files in the archive, assume that we weren't
# able to properly unarchive it
return False, 'Command "%s" found no files in archive. Empty archive files are not supported.' % self.cmd_path
# Class to handle tar files that aren't compressed
class TarArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarArchive, self).__init__(src, b_dest, file_args, module)
# argument to tar
self.zipflag = ''
# Class to handle bzip2 compressed tar files
class TarBzipArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarBzipArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-j'
# Class to handle xz compressed tar files
class TarXzArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarXzArchive, self).__init__(src, b_dest, file_args, module)
self.zipflag = '-J'
# Class to handle zstd compressed tar files
class TarZstdArchive(TgzArchive):
def __init__(self, src, b_dest, file_args, module):
super(TarZstdArchive, self).__init__(src, b_dest, file_args, module)
# GNU Tar supports the --use-compress-program option to
# specify which executable to use for
# compression/decompression.
#
# Note: some flavors of BSD tar support --zstd (e.g., FreeBSD
# 12.2), but the TgzArchive class only supports GNU Tar.
self.zipflag = '--use-compress-program=zstd'
class ZipZArchive(ZipArchive):
def __init__(self, src, b_dest, file_args, module):
super(ZipZArchive, self).__init__(src, b_dest, file_args, module)
self.zipinfoflag = '-Z'
self.binaries = (
('unzip', 'cmd_path'),
('unzip', 'zipinfo_cmd_path'),
)
def can_handle_archive(self):
unzip_available, error_msg = super(ZipZArchive, self).can_handle_archive()
if not unzip_available:
return unzip_available, error_msg
# Ensure unzip -Z is available before we use it in is_unarchived
cmd = [self.zipinfo_cmd_path, self.zipinfoflag]
rc, out, err = self.module.run_command(cmd)
if 'zipinfo' in out.lower():
return True, None
return False, 'Command "unzip -Z" could not handle archive: %s' % err
# try handlers in order and return the one that works or bail if none work
def pick_handler(src, dest, file_args, module):
handlers = [ZipArchive, ZipZArchive, TgzArchive, TarArchive, TarBzipArchive, TarXzArchive, TarZstdArchive]
reasons = set()
for handler in handlers:
obj = handler(src, dest, file_args, module)
(can_handle, reason) = obj.can_handle_archive()
if can_handle:
return obj
reasons.add(reason)
reason_msg = '\n'.join(reasons)
module.fail_json(msg='Failed to find handler for "%s". Make sure the required command to extract the file is installed.\n%s' % (src, reason_msg))
def main():
module = AnsibleModule(
# not checking because of daisy chain to file module
argument_spec=dict(
src=dict(type='path', required=True),
dest=dict(type='path', required=True),
remote_src=dict(type='bool', default=False),
creates=dict(type='path'),
list_files=dict(type='bool', default=False),
keep_newer=dict(type='bool', default=False),
exclude=dict(type='list', elements='str', default=[]),
include=dict(type='list', elements='str', default=[]),
extra_opts=dict(type='list', elements='str', default=[]),
validate_certs=dict(type='bool', default=True),
io_buffer_size=dict(type='int', default=64 * 1024),
# Options that are for the action plugin, but ignored by the module itself.
# We have them here so that the sanity tests pass without ignores, which
# reduces the likelihood of further bugs added.
copy=dict(type='bool', default=True),
decrypt=dict(type='bool', default=True),
),
add_file_common_args=True,
# check mode only works for zip files; we cover that later
supports_check_mode=True,
mutually_exclusive=[('include', 'exclude')],
)
src = module.params['src']
dest = module.params['dest']
b_dest = to_bytes(dest, errors='surrogate_or_strict')
remote_src = module.params['remote_src']
file_args = module.load_file_common_arguments(module.params)
# did tar file arrive?
if not os.path.exists(src):
if not remote_src:
module.fail_json(msg="Source '%s' failed to transfer" % src)
# If remote_src=true, and src= contains ://, try and download the file to a temp directory.
elif '://' in src:
src = fetch_file(module, src)
else:
module.fail_json(msg="Source '%s' does not exist" % src)
if not os.access(src, os.R_OK):
module.fail_json(msg="Source '%s' not readable" % src)
# skip working with 0 size archives
try:
if os.path.getsize(src) == 0:
module.fail_json(msg="Invalid archive '%s', the file is 0 bytes" % src)
except Exception as e:
module.fail_json(msg="Source '%s' not readable, %s" % (src, to_native(e)))
# is dest OK to receive tar file?
if not os.path.isdir(b_dest):
module.fail_json(msg="Destination '%s' is not a directory" % dest)
handler = pick_handler(src, b_dest, file_args, module)
res_args = dict(handler=handler.__class__.__name__, dest=dest, src=src)
# do we need to do unpack?
check_results = handler.is_unarchived()
# DEBUG
# res_args['check_results'] = check_results
if module.check_mode:
res_args['changed'] = not check_results['unarchived']
elif check_results['unarchived']:
res_args['changed'] = False
else:
# do the unpack
try:
res_args['extract_results'] = handler.unarchive()
if res_args['extract_results']['rc'] != 0:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
except IOError:
module.fail_json(msg="failed to unpack %s to %s" % (src, dest), **res_args)
else:
res_args['changed'] = True
# Get diff if required
if check_results.get('diff', False):
res_args['diff'] = {'prepared': check_results['diff']}
# Run only if we found differences (idempotence) or diff was missing
if res_args.get('diff', True) and not module.check_mode:
# do we need to change perms?
top_folders = []
for filename in handler.files_in_archive:
file_args['path'] = os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict'))
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if '/' in filename:
top_folder_path = filename.split('/')[0]
if top_folder_path not in top_folders:
top_folders.append(top_folder_path)
# make sure top folders have the right permissions
# https://github.com/ansible/ansible/issues/35426
if top_folders:
for f in top_folders:
file_args['path'] = "%s/%s" % (dest, f)
try:
res_args['changed'] = module.set_fs_attributes_if_different(file_args, res_args['changed'], expand=False)
except (IOError, OSError) as e:
module.fail_json(msg="Unexpected error when accessing exploded file: %s" % to_native(e), **res_args)
if module.params['list_files']:
res_args['files'] = handler.files_in_archive
module.exit_json(**res_args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,690 |
Dev Guide doc: Duplicate paragraph in the section "Why test your Ansible contributions"
|
### Summary
The introduction paragraph below is duplicated later in the guide. https://github.com/ansible/ansible/blame/devel/docs/docsite/rst/dev_guide/testing.rst#L14
Here's the duplicated section:
https://github.com/ansible/ansible/blame/devel/docs/docsite/rst/dev_guide/testing.rst#L44
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/testing.rst
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.3]
config file = None
configured module search path = ['/Users/william/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/Cellar/ansible/6.3.0/libexec/lib/python3.10/site-packages/ansible
ansible collection location = /Users/william/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.10.6 (main, Aug 11 2022, 13:49:25) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Macos Montrey
### Additional Information
The improvement will remove a redundant paragraph from the guide, making it more concise.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78690
|
https://github.com/ansible/ansible/pull/78691
|
c7b4a25f9d668c45436ef57503c41bd1abb9d4fd
|
e276770ee9efac3a98c3f1116d9cd9c992ca8c9e
| 2022-09-01T19:58:45Z |
python
| 2022-09-15T15:46:57Z |
docs/docsite/rst/dev_guide/testing.rst
|
.. _developing_testing:
***************
Testing Ansible
***************
.. contents::
:local:
Why test your Ansible contributions?
====================================
If you're a developer, one of the most valuable things you can do is to look at GitHub issues and help fix bugs, since bug-fixing is almost always prioritized over feature development. Even for non-developers, helping to test pull requests for bug fixes and features is still immensely valuable.
Ansible users who understand how to write playbooks and roles should be able to test their work. GitHub pull requests will automatically run a variety of tests (for example, Azure Pipelines) that show bugs in action. However, contributors must also test their work outside of the automated GitHub checks and show evidence of these tests in the PR to ensure that their work will be more likely to be reviewed and merged.
Read on to learn how Ansible is tested, how to test your contributions locally, and how to extend testing capabilities.
If you want to learn about testing collections, read :ref:`testing_collections`
Types of tests
==============
At a high level we have the following classifications of tests:
:compile:
* :ref:`testing_compile`
* Test python code against a variety of Python versions.
:sanity:
* :ref:`testing_sanity`
* Sanity tests are made up of scripts and tools used to perform static code analysis.
* The primary purpose of these tests is to enforce Ansible coding standards and requirements.
:integration:
* :ref:`testing_integration`
* Functional tests of modules and Ansible core functionality.
:units:
* :ref:`testing_units`
* Tests directly against individual parts of the code base.
Testing within GitHub & Azure Pipelines
=======================================
Organization
------------
When Pull Requests (PRs) are created they are tested using Azure Pipelines, a Continuous Integration (CI) tool. Results are shown at the end of every PR.
When Azure Pipelines detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example:
.. code-block:: text
The test `ansible-test sanity --test pep8` failed with the following errors:
lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '
The test `ansible-test sanity --test validate-modules` failed with the following error:
lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified an issue. The commands given allow you to run the same tests locally to ensure you've fixed all issues without having to push your changes to GitHub and wait for Azure Pipelines, for example:
If you haven't already got Ansible available, use the local checkout by running:
.. code-block:: shell-session
source hacking/env-setup
Then run the tests detailed in the GitHub comment:
.. code-block:: shell-session
ansible-test sanity --test pep8
ansible-test sanity --test validate-modules
If there isn't a GitHub comment stating what's failed, you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.
Rerunning a failing CI job
--------------------------
Occasionally you may find your PR fails for a reason unrelated to your change. This could happen for several reasons, including:
* a temporary issue accessing an external resource, such as a yum or git repo
* a timeout creating a virtual machine to run the tests on
If either of these issues appear to be the case, you can rerun the Azure Pipelines test by:
* adding a comment with ``/rebuild`` (full rebuild) or ``/rebuild_failed`` (rebuild only failed CI nodes) to the PR
* closing and re-opening the PR (full rebuild)
* making another change to the PR and pushing to GitHub
If the issue persists, please contact us in the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
How to test a PR
================
Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.
Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.
Setup: Checking out a Pull Request
----------------------------------
You can do this by:
* checking out Ansible
* fetching the proposed changes into a test branch
* testing
* commenting on that particular issue on GitHub
Here's how:
.. warning::
Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
other flavors, since some features (for example, package managers such as apt or yum) are specific to those OS versions.
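For example, a throwaway container can serve as a disposable test environment. This is only a sketch; the image name and the use of Docker are assumptions, and a virtual machine or any other container runtime works just as well:

.. code-block:: shell-session

   # Hypothetical disposable test environment; the container (and anything
   # done inside it) is discarded when the shell exits.
   docker run -it --rm ubuntu:22.04 /bin/bash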
Create a fresh area to work:
.. code-block:: shell-session
git clone https://github.com/ansible/ansible.git ansible-pr-testing
cd ansible-pr-testing
Next, find the pull request you'd like to test and make note of its number. It will look something like this::
Use os.path.sep instead of hardcoding / #65381
.. note:: Only test ``ansible:devel``
It is important that the PR target branch be ``ansible:devel``, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
Use the pull request number when you fetch the proposed changes and create your branch for testing:
.. code-block:: shell-session
git fetch origin refs/pull/XXXX/head:testing_PRXXXX
git checkout testing_PRXXXX
The first command fetches the proposed changes from the pull request and creates a new branch named ``testing_PRXXXX``, where the XXXX is the actual number associated with the pull request (for example, 65381). The second command checks out the newly created branch.
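For instance, for pull request 65381 from the example above, the commands would be:

.. code-block:: shell-session

   # Fetch PR 65381 into a local branch and switch to it
   git fetch origin refs/pull/65381/head:testing_PR65381
   git checkout testing_PR65381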
.. note::
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
.. note::
Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of ``devel``. If the source looks like ``someuser:devel``, make sure there is only one commit listed on the pull request.
The Ansible source includes a script, frequently used by Ansible developers, that allows you to use Ansible
directly from source without requiring a full installation.
Simply source it (to use the Linux/Unix terminology) to begin using it immediately:
.. code-block:: shell-session
source ./hacking/env-setup
This script modifies the ``PYTHONPATH`` environment variable (along with a few other things); the changes
remain in effect only for as long as your shell session is open.
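You can confirm the source checkout is active by checking which ``ansible`` binary is found and the version it reports; the exact output depends on your checkout:

.. code-block:: shell-session

   # Both should point at the source checkout, not a system install
   which ansible
   ansible --version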
Testing the Pull Request
------------------------
At this point, you should be ready to begin testing!
Some ideas of what to test are:
* Create a test playbook from the examples in the PR and check that they function correctly (a minimal sketch follows this list)
* Check whether any Python tracebacks are returned (that's a bug)
* Test on different operating systems, or against different library versions
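As a minimal sketch, a smoke-test playbook might look like the following; the module and its arguments are placeholders that you would replace with the examples from the PR under test:

.. code-block:: yaml

   # Hypothetical smoke test; swap ping for the module changed in the PR.
   - hosts: localhost
     gather_facts: no
     tasks:
       - name: Exercise the changed code
         ansible.builtin.ping:

Run it with ``ansible-playbook -v`` and watch the output for tracebacks or unexpected failures.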
Run sanity tests
^^^^^^^^^^^^^^^^
.. code:: shell
ansible-test sanity
More information: :ref:`testing_sanity`
Run unit tests
^^^^^^^^^^^^^^
.. code:: shell
ansible-test units
More information: :ref:`testing_units`
Run integration tests
^^^^^^^^^^^^^^^^^^^^^
.. code:: shell
ansible-test integration -v ping
More information: :ref:`testing_integration`
Any potential issues should be added as comments on the pull request (it is also acceptable to comment when the feature works as expected), remembering to include the output of ``ansible --version``.
Example::
Works for me! Tested on `Ansible 2.3.0`. I verified this on CentOS 6.5 and also Ubuntu 14.04.
If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:
| This change causes errors for me.
|
| When I ran this on Ubuntu 16.04 it failed with the following:
|
| \```
| some output
| StackTrace
| some other output
| \```
Code Coverage Online
^^^^^^^^^^^^^^^^^^^^
`The online code coverage reports <https://codecov.io/gh/ansible/ansible>`_ are a good way
to identify areas for testing improvement in Ansible. By following the red colors you can
drill down through the reports to find files that have no tests at all. Adding both
integration and unit tests that show clearly how code should work, verify important
Ansible functions, and increase testing coverage in areas where there is none is a valuable
way to help improve Ansible.
The code coverage reports only cover the ``devel`` branch of Ansible, where new feature
development takes place. Pull requests and new code will be missing from the codecov.io
coverage reports, so local reporting is needed. Most ``ansible-test`` commands allow you
to collect code coverage; this is particularly useful for indicating where to extend
testing. See :ref:`testing_running_locally` for more information.
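As a sketch, one way to collect and inspect local coverage (the ``ping`` target is just an example):

.. code-block:: shell-session

   # Run an integration target with coverage collection enabled,
   # then summarize the collected data
   ansible-test integration ping --coverage
   ansible-test coverage report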
Want to know more about testing?
================================
If you'd like to know more about the plans for improving the testing of Ansible, consider joining the
`Testing Working Group <https://github.com/ansible/community/blob/main/meetings/README.md>`_.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,762 |
core 2.13.4 breaking change in apt only_upgrade
|
### Summary
Using `apt` with `only_upgrade: yes` fails if a package is currently not installed. It seems this was recently introduced with https://github.com/ansible/ansible/pull/78327. For us this was a breaking change - was this intentional?
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
empty for me
```
### OS / Environment
ubuntu/focal64 20220804.0.0 (vagrant box)
### Steps to Reproduce
```yaml (paste below)
- name: upgrade if installed
apt:
name: "{{ packages }}"
only_upgrade: yes
vars:
packages:
- foo # not installed
- bar
```
### Expected Results
no failure if a package is currently not installed
### Actual Results
```console
TASK [my_role : upgrade if installed.] ***********************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"cache_update_time": 1663062238, "cache_updated": false, "changed": false, "msg": "no available installation candidate for foo"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78762
|
https://github.com/ansible/ansible/pull/78781
|
9bc4fa496ca06d21b347071078b0f12343481e07
|
4b45b4b09d9257006f7b23237293c8c1a04521d8
| 2022-09-13T16:11:10Z |
python
| 2022-09-15T19:42:34Z |
changelogs/fragments/78781-fix-apt-only_upgrade-behavior.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,762 |
core 2.13.4 breaking change in apt only_upgrade
|
### Summary
Using `apt` with `only_upgrade: yes` fails if a package is currently not installed. It seems this was recently introduced with https://github.com/ansible/ansible/pull/78327. For us this was a breaking change - was this intentional?
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
empty for me
```
### OS / Environment
ubuntu/focal64 20220804.0.0 (vagrant box)
### Steps to Reproduce
```yaml (paste below)
- name: upgrade if installed
apt:
name: "{{ packages }}"
only_upgrade: yes
vars:
packages:
- foo # not installed
- bar
```
### Expected Results
no failure if a package is currently not installed
### Actual Results
```console
TASK [my_role : upgrade if installed.] ***********************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"cache_update_time": 1663062238, "cache_updated": false, "changed": false, "msg": "no available installation candidate for foo"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78762
|
https://github.com/ansible/ansible/pull/78781
|
9bc4fa496ca06d21b347071078b0f12343481e07
|
4b45b4b09d9257006f7b23237293c8c1a04521d8
| 2022-09-13T16:11:10Z |
python
| 2022-09-15T19:42:34Z |
lib/ansible/modules/apt.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2012, Flowroute LLC
# Written by Matthew Williams <[email protected]>
# Based on yum module written by Seth Vidal <skvidal at fedoraproject.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import absolute_import, division, print_function
__metaclass__ = type
DOCUMENTATION = '''
---
module: apt
short_description: Manages apt-packages
description:
- Manages I(apt) packages (such as for Debian/Ubuntu).
version_added: "0.0.2"
options:
name:
description:
- A list of package names, like C(foo), or package specifier with version, like C(foo=1.0) or C(foo>=1.0).
Name wildcards (fnmatch) like C(apt*) and version wildcards like C(foo=1.0*) are also supported.
aliases: [ package, pkg ]
type: list
elements: str
state:
description:
- Indicates the desired package state. C(latest) ensures that the latest version is installed. C(build-dep) ensures the package build dependencies
are installed. C(fixed) attempt to correct a system with broken dependencies in place.
type: str
default: present
choices: [ absent, build-dep, latest, present, fixed ]
update_cache:
description:
- Run the equivalent of C(apt-get update) before the operation. Can be run as part of the package installation or as a separate step.
- Default is not to update the cache.
aliases: [ update-cache ]
type: bool
update_cache_retries:
description:
- Amount of retries if the cache update fails. Also see I(update_cache_retry_max_delay).
type: int
default: 5
version_added: '2.10'
update_cache_retry_max_delay:
description:
- Use an exponential backoff delay for each retry (see I(update_cache_retries)) up to this max delay in seconds.
type: int
default: 12
version_added: '2.10'
cache_valid_time:
description:
- Update the apt cache if it is older than the I(cache_valid_time). This option is set in seconds.
- As of Ansible 2.4, if explicitly set, this sets I(update_cache=yes).
type: int
default: 0
purge:
description:
- Will force purging of configuration files if the module state is set to I(absent).
type: bool
default: 'no'
default_release:
description:
- Corresponds to the C(-t) option for I(apt) and sets pin priorities
aliases: [ default-release ]
type: str
install_recommends:
description:
- Corresponds to the C(--no-install-recommends) option for I(apt). C(yes) installs recommended packages. C(no) does not install
recommended packages. By default, Ansible will use the same defaults as the operating system. Suggested packages are never installed.
aliases: [ install-recommends ]
type: bool
force:
description:
- 'Corresponds to the C(--force-yes) to I(apt-get) and implies C(allow_unauthenticated: yes) and C(allow_downgrade: yes)'
- "This option will disable checking both the packages' signatures and the certificates of the
web servers they are downloaded from."
- 'This option *is not* the equivalent of passing the C(-f) flag to I(apt-get) on the command line'
- '**This is a destructive operation with the potential to destroy your system, and it should almost never be used.**
Please also see C(man apt-get) for more information.'
type: bool
default: 'no'
clean:
description:
- Run the equivalent of C(apt-get clean) to clear out the local repository of retrieved package files. It removes everything but
the lock file from /var/cache/apt/archives/ and /var/cache/apt/archives/partial/.
- Can be run as part of the package installation (clean runs before install) or as a separate step.
type: bool
default: 'no'
version_added: "2.13"
allow_unauthenticated:
description:
- Ignore if packages cannot be authenticated. This is useful for bootstrapping environments that manage their own apt-key setup.
- 'C(allow_unauthenticated) is only supported with state: I(install)/I(present)'
aliases: [ allow-unauthenticated ]
type: bool
default: 'no'
version_added: "2.1"
allow_downgrade:
description:
- Corresponds to the C(--allow-downgrades) option for I(apt).
- This option enables the named package and version to replace an already installed higher version of that package.
- Note that setting I(allow_downgrade=true) can make this module behave in a non-idempotent way.
- (The task could end up with a set of packages that does not match the complete list of specified packages to install).
aliases: [ allow-downgrade, allow_downgrades, allow-downgrades ]
type: bool
default: 'no'
version_added: "2.12"
allow_change_held_packages:
description:
- Allows changing the version of a package which is on the apt hold list
type: bool
default: 'no'
version_added: '2.13'
upgrade:
description:
- If yes or safe, performs an aptitude safe-upgrade.
- If full, performs an aptitude full-upgrade.
- If dist, performs an apt-get dist-upgrade.
- 'Note: This does not upgrade a specific package, use state=latest for that.'
- 'Note: Since 2.4, apt-get is used as a fall-back if aptitude is not present.'
version_added: "1.1"
choices: [ dist, full, 'no', safe, 'yes' ]
default: 'no'
type: str
dpkg_options:
description:
- Add dpkg options to apt command. Defaults to '-o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"'
- Options should be supplied as comma separated list
default: force-confdef,force-confold
type: str
deb:
description:
- Path to a .deb package on the remote machine.
      - If C(://) is in the path, ansible will attempt to download the deb before installing. (Version added 2.1)
- Requires the C(xz-utils) package to extract the control file of the deb package to install.
type: path
required: false
version_added: "1.6"
autoremove:
description:
- If C(yes), remove unused dependency packages for all module states except I(build-dep). It can also be used as the only option.
- Previous to version 2.4, autoclean was also an alias for autoremove, now it is its own separate command. See documentation for further information.
type: bool
default: 'no'
version_added: "2.1"
autoclean:
description:
- If C(yes), cleans the local repository of retrieved package files that can no longer be downloaded.
type: bool
default: 'no'
version_added: "2.4"
policy_rc_d:
description:
- Force the exit code of /usr/sbin/policy-rc.d.
- For example, if I(policy_rc_d=101) the installed package will not trigger a service start.
- If /usr/sbin/policy-rc.d already exists, it is backed up and restored after the package installation.
- If C(null), the /usr/sbin/policy-rc.d isn't created/changed.
type: int
default: null
version_added: "2.8"
only_upgrade:
description:
- Only upgrade a package if it is already installed.
type: bool
default: 'no'
version_added: "2.1"
fail_on_autoremove:
description:
- 'Corresponds to the C(--no-remove) option for C(apt).'
- 'If C(yes), it is ensured that no packages will be removed or the task will fail.'
      - 'C(fail_on_autoremove) is only supported with states other than C(absent).'
type: bool
default: 'no'
version_added: "2.11"
force_apt_get:
description:
- Force usage of apt-get instead of aptitude
type: bool
default: 'no'
version_added: "2.4"
lock_timeout:
description:
- How many seconds will this action wait to acquire a lock on the apt db.
- Sometimes there is a transitory lock and this will retry at least until timeout is hit.
type: int
default: 60
version_added: "2.12"
requirements:
- python-apt (python 2)
- python3-apt (python 3)
- aptitude (before 2.4)
author: "Matthew Williams (@mgwilliams)"
extends_documentation_fragment: action_common_attributes
attributes:
check_mode:
support: full
diff_mode:
support: full
platform:
platforms: debian
notes:
   - Three of the upgrade modes (C(full), C(safe) and its alias C(yes)) required C(aptitude) up to 2.3; since 2.4, C(apt-get) is used as a fall-back.
- In most cases, packages installed with apt will start newly installed services by default. Most distributions have mechanisms to avoid this.
     For example when installing Postgresql-9.5 in Debian 9, creating an executable shell script (/usr/sbin/policy-rc.d) that returns
     an exit code of 101 will stop Postgresql 9.5 from starting up after install. Remove the file or remove its execute permission afterwards.
- The apt-get commandline supports implicit regex matches here but we do not because it can let typos through easier
(If you typo C(foo) as C(fo) apt-get would install packages that have "fo" in their name with a warning and a prompt for the user.
     Since we don't have warnings and prompts before installing we disallow this. Use an explicit fnmatch pattern if you want wildcarding)
- When used with a C(loop:) each package will be processed individually, it is much more efficient to pass the list directly to the I(name) option.
- When C(default_release) is used, an implicit priority of 990 is used. This is the same behavior as C(apt-get -t).
- When an exact version is specified, an implicit priority of 1001 is used.
'''
EXAMPLES = '''
- name: Install apache httpd (state=present is optional)
ansible.builtin.apt:
name: apache2
state: present
- name: Update repositories cache and install "foo" package
ansible.builtin.apt:
name: foo
update_cache: yes
- name: Remove "foo" package
ansible.builtin.apt:
name: foo
state: absent
- name: Install the package "foo"
ansible.builtin.apt:
name: foo
- name: Install a list of packages
ansible.builtin.apt:
pkg:
- foo
- foo-tools
- name: Install the version '1.00' of package "foo"
ansible.builtin.apt:
name: foo=1.00
- name: Update the repository cache and update package "nginx" to latest version using default release squeeze-backport
ansible.builtin.apt:
name: nginx
state: latest
default_release: squeeze-backports
update_cache: yes
- name: Install the version '1.18.0' of package "nginx" and allow potential downgrades
ansible.builtin.apt:
name: nginx=1.18.0
state: present
allow_downgrade: yes
- name: Install zfsutils-linux with ensuring conflicted packages (e.g. zfs-fuse) will not be removed.
ansible.builtin.apt:
name: zfsutils-linux
state: latest
fail_on_autoremove: yes
- name: Install latest version of "openjdk-6-jdk" ignoring "install-recommends"
ansible.builtin.apt:
name: openjdk-6-jdk
state: latest
install_recommends: no
- name: Update all packages to their latest version
ansible.builtin.apt:
name: "*"
state: latest
- name: Upgrade the OS (apt-get dist-upgrade)
ansible.builtin.apt:
upgrade: dist
- name: Run the equivalent of "apt-get update" as a separate step
ansible.builtin.apt:
update_cache: yes
- name: Only run "update_cache=yes" if the last one is more than 3600 seconds ago
ansible.builtin.apt:
update_cache: yes
cache_valid_time: 3600
- name: Pass options to dpkg on run
ansible.builtin.apt:
upgrade: dist
update_cache: yes
dpkg_options: 'force-confold,force-confdef'
- name: Install a .deb package
ansible.builtin.apt:
deb: /tmp/mypackage.deb
- name: Install the build dependencies for package "foo"
ansible.builtin.apt:
pkg: foo
state: build-dep
- name: Install a .deb package from the internet
ansible.builtin.apt:
deb: https://example.com/python-ppq_0.1-1_all.deb
- name: Remove useless packages from the cache
ansible.builtin.apt:
autoclean: yes
- name: Remove dependencies that are no longer required
ansible.builtin.apt:
autoremove: yes
- name: Run the equivalent of "apt-get clean" as a separate step
apt:
clean: yes
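# A sketch illustrating the documented only_upgrade option;
# "foo" is only upgraded when it is already installed.
- name: Upgrade "foo" only if it is already installed
  ansible.builtin.apt:
    name: foo
    only_upgrade: yes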
'''
RETURN = '''
cache_updated:
description: if the cache was updated or not
returned: success, in some cases
type: bool
sample: True
cache_update_time:
description: time of the last cache update (0 if unknown)
returned: success, in some cases
type: int
sample: 1425828348000
stdout:
description: output from apt
returned: success, when needed
type: str
sample: |-
Reading package lists...
Building dependency tree...
Reading state information...
The following extra packages will be installed:
apache2-bin ...
stderr:
description: error output from apt
returned: success, when needed
type: str
sample: "AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to ..."
''' # NOQA
# added to stave off future warnings about apt api
import warnings
warnings.filterwarnings('ignore', "apt API not stable yet", FutureWarning)
import datetime
import fnmatch
import itertools
import os
import random
import re
import shutil
import sys
import tempfile
import time
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.common.locale import get_best_parsable_locale
from ansible.module_utils.common.respawn import has_respawned, probe_interpreters_for_module, respawn_module
from ansible.module_utils._text import to_native
from ansible.module_utils.six import PY3
from ansible.module_utils.urls import fetch_file
DPKG_OPTIONS = 'force-confdef,force-confold'
APT_GET_ZERO = "\n0 upgraded, 0 newly installed"
APTITUDE_ZERO = "\n0 packages upgraded, 0 newly installed"
APT_LISTS_PATH = "/var/lib/apt/lists"
APT_UPDATE_SUCCESS_STAMP_PATH = "/var/lib/apt/periodic/update-success-stamp"
APT_MARK_INVALID_OP = 'Invalid operation'
APT_MARK_INVALID_OP_DEB6 = 'Usage: apt-mark [options] {markauto|unmarkauto} packages'
CLEAN_OP_CHANGED_STR = dict(
autoremove='The following packages will be REMOVED',
# "Del python3-q 2.4-1 [24 kB]"
autoclean='Del ',
)
HAS_PYTHON_APT = False
try:
import apt
import apt.debfile
import apt_pkg
HAS_PYTHON_APT = True
except ImportError:
apt = apt_pkg = None
class PolicyRcD(object):
"""
This class is a context manager for the /usr/sbin/policy-rc.d file.
    It allows the user to prevent dpkg from starting the corresponding service when installing
a package.
https://people.debian.org/~hmh/invokerc.d-policyrc.d-specification.txt
"""
def __init__(self, module):
# we need the module for later use (eg. fail_json)
self.m = module
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists
# we will back it up during package installation
# then restore it
if os.path.exists('/usr/sbin/policy-rc.d'):
self.backup_dir = tempfile.mkdtemp(prefix="ansible")
else:
self.backup_dir = None
def __enter__(self):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
# if the /usr/sbin/policy-rc.d already exists we back it up
if self.backup_dir:
try:
shutil.move('/usr/sbin/policy-rc.d', self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move /usr/sbin/policy-rc.d to %s" % self.backup_dir)
# we write /usr/sbin/policy-rc.d so it always exits with code policy_rc_d
try:
with open('/usr/sbin/policy-rc.d', 'w') as policy_rc_d:
policy_rc_d.write('#!/bin/sh\nexit %d\n' % self.m.params['policy_rc_d'])
os.chmod('/usr/sbin/policy-rc.d', 0o0755)
except Exception:
self.m.fail_json(msg="Failed to create or chmod /usr/sbin/policy-rc.d")
def __exit__(self, type, value, traceback):
"""
This method will be called when we enter the context, before we call `apt-get …`
"""
# if policy_rc_d is null then we don't need to modify policy-rc.d
if self.m.params['policy_rc_d'] is None:
return
if self.backup_dir:
# if /usr/sbin/policy-rc.d already exists before the call to __enter__
# we restore it (from the backup done in __enter__)
try:
shutil.move(os.path.join(self.backup_dir, 'policy-rc.d'),
'/usr/sbin/policy-rc.d')
os.rmdir(self.backup_dir)
except Exception:
self.m.fail_json(msg="Fail to move back %s to /usr/sbin/policy-rc.d"
% os.path.join(self.backup_dir, 'policy-rc.d'))
else:
# if there wasn't a /usr/sbin/policy-rc.d file before the call to __enter__
# we just remove the file
try:
os.remove('/usr/sbin/policy-rc.d')
except Exception:
self.m.fail_json(msg="Fail to remove /usr/sbin/policy-rc.d (after package manipulation)")
def package_split(pkgspec):
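    # Split a spec such as 'foo', 'foo=1.0' or 'foo>=1.0' into
    # (name, comparator, version); bare names (no comparator)
    # fall through to the final return below.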
parts = re.split(r'(>?=)', pkgspec, 1)
if len(parts) > 1:
return parts
return parts[0], None, None
def package_version_compare(version, other_version):
try:
return apt_pkg.version_compare(version, other_version)
except AttributeError:
return apt_pkg.VersionCompare(version, other_version)
def package_best_match(pkgname, version_cmp, version, release, cache):
policy = apt_pkg.Policy(cache)
policy.read_pinfile(apt_pkg.config.find_file("Dir::Etc::preferences"))
policy.read_pindir(apt_pkg.config.find_file("Dir::Etc::preferencesparts"))
if release:
# 990 is the priority used in `apt-get -t`
policy.create_pin('Release', pkgname, release, 990)
if version_cmp == "=":
# Installing a specific version from command line overrides all pinning
        # We don't mimic this exactly, but instead set a priority which is higher than all APT built-in pin priorities.
policy.create_pin('Version', pkgname, version, 1001)
pkg = cache[pkgname]
pkgver = policy.get_candidate_ver(pkg)
if not pkgver:
return None
if version_cmp == "=" and not fnmatch.fnmatch(pkgver.ver_str, version):
# Even though we put in a pin policy, it can be ignored if there is no
# possible candidate.
return None
return pkgver.ver_str
def package_status(m, pkgname, version_cmp, version, default_release, cache, state):
"""
:return: A tuple of (installed, installed_version, version_installable, has_files). *installed* indicates whether
the package (regardless of version) is installed. *installed_version* indicates whether the installed package
matches the provided version criteria. *version_installable* provides the latest matching version that can be
installed. In the case of virtual packages where we can't determine an applicable match, True is returned.
*has_files* indicates whether the package has files on the filesystem (even if not installed, meaning a purge is
required).
"""
try:
# get the package from the cache, as well as the
# low-level apt_pkg.Package object which contains
# state fields not directly accessible from the
# higher-level apt.package.Package object.
pkg = cache[pkgname]
ll_pkg = cache._cache[pkgname] # the low-level package object
except KeyError:
if state == 'install':
try:
provided_packages = cache.get_providing_packages(pkgname)
if provided_packages:
# When this is a virtual package satisfied by only
# one installed package, return the status of the target
# package to avoid requesting re-install
if cache.is_virtual_package(pkgname) and len(provided_packages) == 1:
package = provided_packages[0]
installed, installed_version, version_installable, has_files = \
package_status(m, package.name, version_cmp, version, default_release, cache, state='install')
if installed:
return installed, installed_version, version_installable, has_files
# Otherwise return nothing so apt will sort out
# what package to satisfy this with
return False, False, True, False
m.fail_json(msg="No package matching '%s' is available" % pkgname)
except AttributeError:
# python-apt version too old to detect virtual packages
# mark as not installed and let apt-get install deal with it
return False, False, True, False
else:
return False, False, None, False
try:
has_files = len(pkg.installed_files) > 0
except UnicodeDecodeError:
has_files = True
except AttributeError:
has_files = False # older python-apt cannot be used to determine non-purged
try:
package_is_installed = ll_pkg.current_state == apt_pkg.CURSTATE_INSTALLED
except AttributeError: # python-apt 0.7.X has very weak low-level object
try:
# might not be necessary as python-apt post-0.7.X should have current_state property
package_is_installed = pkg.is_installed
except AttributeError:
# assume older version of python-apt is installed
package_is_installed = pkg.isInstalled
version_best = package_best_match(pkgname, version_cmp, version, default_release, cache._cache)
version_is_installed = False
version_installable = None
if package_is_installed:
try:
installed_version = pkg.installed.version
except AttributeError:
installed_version = pkg.installedVersion
if version_cmp == "=":
# check if the version is matched as well
version_is_installed = fnmatch.fnmatch(installed_version, version)
if version_best and installed_version != version_best and fnmatch.fnmatch(version_best, version):
version_installable = version_best
elif version_cmp == ">=":
version_is_installed = apt_pkg.version_compare(installed_version, version) >= 0
if version_best and installed_version != version_best and apt_pkg.version_compare(version_best, version) >= 0:
version_installable = version_best
else:
version_is_installed = True
if version_best and installed_version != version_best:
version_installable = version_best
else:
version_installable = version_best
return package_is_installed, version_is_installed, version_installable, has_files
def expand_dpkg_options(dpkg_options_compressed):
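    # For example, 'force-confdef,force-confold' expands to:
    #   -o "Dpkg::Options::=--force-confdef" -o "Dpkg::Options::=--force-confold"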
options_list = dpkg_options_compressed.split(',')
dpkg_options = ""
for dpkg_option in options_list:
dpkg_options = '%s -o "Dpkg::Options::=--%s"' \
% (dpkg_options, dpkg_option)
return dpkg_options.strip()
def expand_pkgspec_from_fnmatches(m, pkgspec, cache):
# Note: apt-get does implicit regex matching when an exact package name
# match is not found. Something like this:
# matches = [pkg.name for pkg in cache if re.match(pkgspec, pkg.name)]
# (Should also deal with the ':' for multiarch like the fnmatch code below)
#
# We have decided not to do similar implicit regex matching but might take
# a PR to add some sort of explicit regex matching:
# https://github.com/ansible/ansible-modules-core/issues/1258
new_pkgspec = []
if pkgspec:
for pkgspec_pattern in pkgspec:
pkgname_pattern, version_cmp, version = package_split(pkgspec_pattern)
# note that none of these chars is allowed in a (debian) pkgname
if frozenset('*?[]!').intersection(pkgname_pattern):
# handle multiarch pkgnames, the idea is that "apt*" should
# only select native packages. But "apt*:i386" should still work
if ":" not in pkgname_pattern:
# Filter the multiarch packages from the cache only once
try:
pkg_name_cache = _non_multiarch # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _non_multiarch = [pkg.name for pkg in cache if ':' not in pkg.name] # noqa: F841
else:
# Create a cache of pkg_names including multiarch only once
try:
pkg_name_cache = _all_pkg_names # pylint: disable=used-before-assignment
except NameError:
pkg_name_cache = _all_pkg_names = [pkg.name for pkg in cache] # noqa: F841
matches = fnmatch.filter(pkg_name_cache, pkgname_pattern)
if not matches:
m.fail_json(msg="No package(s) matching '%s' available" % str(pkgname_pattern))
else:
new_pkgspec.extend(matches)
else:
# No wildcards in name
new_pkgspec.append(pkgspec_pattern)
return new_pkgspec
def parse_diff(output):
diff = to_native(output).splitlines()
try:
# check for start marker from aptitude
diff_start = diff.index('Resolving dependencies...')
except ValueError:
try:
# check for start marker from apt-get
diff_start = diff.index('Reading state information...')
except ValueError:
# show everything
diff_start = -1
try:
# check for end marker line from both apt-get and aptitude
diff_end = next(i for i, item in enumerate(diff) if re.match('[0-9]+ (packages )?upgraded', item))
except StopIteration:
diff_end = len(diff)
diff_start += 1
diff_end += 1
return {'prepared': '\n'.join(diff[diff_start:diff_end])}
def mark_installed_manually(m, packages):
if not packages:
return
apt_mark_cmd_path = m.get_bin_path("apt-mark")
# https://github.com/ansible/ansible/issues/40531
if apt_mark_cmd_path is None:
m.warn("Could not find apt-mark binary, not marking package(s) as manually installed.")
return
cmd = "%s manual %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if APT_MARK_INVALID_OP in err or APT_MARK_INVALID_OP_DEB6 in err:
cmd = "%s unmarkauto %s" % (apt_mark_cmd_path, ' '.join(packages))
rc, out, err = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
def install(m, pkgspec, cache, upgrade=False, default_release=None,
install_recommends=None, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS),
build_dep=False, fixed=False, autoremove=False, fail_on_autoremove=False, only_upgrade=False,
allow_unauthenticated=False, allow_downgrade=False, allow_change_held_packages=False):
pkg_list = []
packages = ""
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
package_names = []
for package in pkgspec:
if build_dep:
# Let apt decide what to install
pkg_list.append("'%s'" % package)
continue
name, version_cmp, version = package_split(package)
package_names.append(name)
installed, installed_version, version_installable, has_files = package_status(m, name, version_cmp, version, default_release, cache, state='install')
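        # Note: when only_upgrade is set and the package is not installed, the
        # branch below fails the task with 'no available installation candidate'
        # (the behavior reported in issue 78762, addressed later by PR 78781).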
if (not installed_version and not version_installable) or (not installed and only_upgrade):
status = False
data = dict(msg="no available installation candidate for %s" % package)
return (status, data)
if version_installable and ((not installed and not only_upgrade) or upgrade or not installed_version):
if version_installable is not True:
pkg_list.append("'%s=%s'" % (name, version_installable))
elif version:
pkg_list.append("'%s=%s'" % (name, version))
else:
pkg_list.append("'%s'" % name)
elif installed_version and version_installable and version_cmp == "=":
# This happens when the package is installed, a newer version is
# available, and the version is a wildcard that matches both
#
# This is legacy behavior, and isn't documented (in fact it does
# things documentations says it shouldn't). It should not be relied
# upon.
pkg_list.append("'%s=%s'" % (name, version))
packages = ' '.join(pkg_list)
if packages:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
if only_upgrade:
only_upgrade = '--only-upgrade'
else:
only_upgrade = ''
if fixed:
fixed = '--fix-broken'
else:
fixed = ''
if build_dep:
cmd = "%s -y %s %s %s %s %s %s build-dep %s" % (APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, fail_on_autoremove, check_arg, packages)
else:
cmd = "%s -y %s %s %s %s %s %s %s install %s" % \
(APT_GET_CMD, dpkg_options, only_upgrade, fixed, force_yes, autoremove, fail_on_autoremove, check_arg, packages)
if default_release:
cmd += " -t '%s'" % (default_release,)
if install_recommends is False:
cmd += " -o APT::Install-Recommends=no"
elif install_recommends is True:
cmd += " -o APT::Install-Recommends=yes"
# install_recommends is None uses the OS default
if allow_unauthenticated:
cmd += " --allow-unauthenticated"
if allow_downgrade:
cmd += " --allow-downgrades"
if allow_change_held_packages:
cmd += " --allow-change-held-packages"
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
status = True
changed = True
if build_dep:
changed = APT_GET_ZERO not in out
data = dict(changed=changed, stdout=out, stderr=err, diff=diff)
if rc:
status = False
data = dict(msg="'%s' failed: %s" % (cmd, err), stdout=out, stderr=err, rc=rc)
else:
status = True
data = dict(changed=False)
if not build_dep and not m.check_mode:
mark_installed_manually(m, package_names)
return (status, data)
def get_field_of_deb(m, deb_file, field="Version"):
cmd_dpkg = m.get_bin_path("dpkg", True)
cmd = cmd_dpkg + " --field %s %s" % (deb_file, field)
rc, stdout, stderr = m.run_command(cmd)
if rc != 0:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
return to_native(stdout).strip('\n')
def install_deb(
m, debs, cache, force, fail_on_autoremove, install_recommends,
allow_unauthenticated,
allow_downgrade,
allow_change_held_packages,
dpkg_options,
):
changed = False
deps_to_install = []
pkgs_to_install = []
for deb_file in debs.split(','):
try:
pkg = apt.debfile.DebPackage(deb_file, cache=apt.Cache())
pkg_name = get_field_of_deb(m, deb_file, "Package")
pkg_version = get_field_of_deb(m, deb_file, "Version")
if hasattr(apt_pkg, 'get_architectures') and len(apt_pkg.get_architectures()) > 1:
pkg_arch = get_field_of_deb(m, deb_file, "Architecture")
pkg_key = "%s:%s" % (pkg_name, pkg_arch)
else:
pkg_key = pkg_name
try:
installed_pkg = apt.Cache()[pkg_key]
installed_version = installed_pkg.installed.version
if package_version_compare(pkg_version, installed_version) == 0:
# Does not need to down-/upgrade, move on to next package
continue
except Exception:
# Must not be installed, continue with installation
pass
# Check if package is installable
if not pkg.check():
if force or ("later version" in pkg._failure_string and allow_downgrade):
pass
else:
m.fail_json(msg=pkg._failure_string)
# add any missing deps to the list of deps we need
# to install so they're all done in one shot
deps_to_install.extend(pkg.missing_deps)
except Exception as e:
m.fail_json(msg="Unable to install package: %s" % to_native(e))
# and add this deb to the list of packages to install
pkgs_to_install.append(deb_file)
# install the deps through apt
retvals = {}
if deps_to_install:
(success, retvals) = install(m=m, pkgspec=deps_to_install, cache=cache,
install_recommends=install_recommends,
fail_on_autoremove=fail_on_autoremove,
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
dpkg_options=expand_dpkg_options(dpkg_options))
if not success:
m.fail_json(**retvals)
changed = retvals.get('changed', False)
if pkgs_to_install:
options = ' '.join(["--%s" % x for x in dpkg_options.split(",")])
if m.check_mode:
options += " --simulate"
if force:
options += " --force-all"
cmd = "dpkg %s -i %s" % (options, " ".join(pkgs_to_install))
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if "stdout" in retvals:
stdout = retvals["stdout"] + out
else:
stdout = out
if "diff" in retvals:
diff = retvals["diff"]
if 'prepared' in diff:
diff['prepared'] += '\n\n' + out
else:
diff = parse_diff(out)
if "stderr" in retvals:
stderr = retvals["stderr"] + err
else:
stderr = err
if rc == 0:
m.exit_json(changed=True, stdout=stdout, stderr=stderr, diff=diff)
else:
m.fail_json(msg="%s failed" % cmd, stdout=stdout, stderr=stderr)
else:
m.exit_json(changed=changed, stdout=retvals.get('stdout', ''), stderr=retvals.get('stderr', ''), diff=retvals.get('diff', ''))
def remove(m, pkgspec, cache, purge=False, force=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False):
pkg_list = []
pkgspec = expand_pkgspec_from_fnmatches(m, pkgspec, cache)
for package in pkgspec:
name, version_cmp, version = package_split(package)
installed, installed_version, upgradable, has_files = package_status(m, name, version_cmp, version, None, cache, state='remove')
if installed_version or (has_files and purge):
pkg_list.append("'%s'" % package)
packages = ' '.join(pkg_list)
if not packages:
m.exit_json(changed=False)
else:
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -q -y %s %s %s %s %s remove %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, autoremove, check_arg, packages)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get remove %s' failed: %s" % (packages, err), stdout=out, stderr=err, rc=rc)
m.exit_json(changed=True, stdout=out, stderr=err, diff=diff)
def cleanup(m, purge=False, force=False, operation=None,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS)):
if operation not in frozenset(['autoremove', 'autoclean']):
raise AssertionError('Expected "autoremove" or "autoclean" cleanup operation, got %s' % operation)
if force:
force_yes = '--force-yes'
else:
force_yes = ''
if purge:
purge = '--purge'
else:
purge = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
cmd = "%s -y %s %s %s %s %s" % (APT_GET_CMD, dpkg_options, purge, force_yes, operation, check_arg)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'apt-get %s' failed: %s" % (operation, err), stdout=out, stderr=err, rc=rc)
changed = CLEAN_OP_CHANGED_STR[operation] in out
m.exit_json(changed=changed, stdout=out, stderr=err, diff=diff)
def aptclean(m):
clean_rc, clean_out, clean_err = m.run_command(['apt-get', 'clean'])
if m._diff:
clean_diff = parse_diff(clean_out)
else:
clean_diff = {}
if clean_rc:
m.fail_json(msg="apt-get clean failed", stdout=clean_out, rc=clean_rc)
if clean_err:
m.fail_json(msg="apt-get clean failed: %s" % clean_err, stdout=clean_out, rc=clean_rc)
return clean_out, clean_err
def upgrade(m, mode="yes", force=False, default_release=None,
use_apt_get=False,
dpkg_options=expand_dpkg_options(DPKG_OPTIONS), autoremove=False, fail_on_autoremove=False,
allow_unauthenticated=False,
allow_downgrade=False,
):
if autoremove:
autoremove = '--auto-remove'
else:
autoremove = ''
if m.check_mode:
check_arg = '--simulate'
else:
check_arg = ''
apt_cmd = None
prompt_regex = None
if mode == "dist" or (mode == "full" and use_apt_get):
# apt-get dist-upgrade
apt_cmd = APT_GET_CMD
upgrade_command = "dist-upgrade %s" % (autoremove)
elif mode == "full" and not use_apt_get:
# aptitude full-upgrade
apt_cmd = APTITUDE_CMD
upgrade_command = "full-upgrade"
else:
if use_apt_get:
apt_cmd = APT_GET_CMD
upgrade_command = "upgrade --with-new-pkgs %s" % (autoremove)
else:
# aptitude safe-upgrade # mode=yes # default
apt_cmd = APTITUDE_CMD
upgrade_command = "safe-upgrade"
prompt_regex = r"(^Do you want to ignore this warning and proceed anyway\?|^\*\*\*.*\[default=.*\])"
if force:
if apt_cmd == APT_GET_CMD:
force_yes = '--force-yes'
else:
force_yes = '--assume-yes --allow-untrusted'
else:
force_yes = ''
if fail_on_autoremove:
fail_on_autoremove = '--no-remove'
else:
fail_on_autoremove = ''
allow_unauthenticated = '--allow-unauthenticated' if allow_unauthenticated else ''
allow_downgrade = '--allow-downgrades' if allow_downgrade else ''
if apt_cmd is None:
if use_apt_get:
apt_cmd = APT_GET_CMD
else:
m.fail_json(msg="Unable to find APTITUDE in path. Please make sure "
"to have APTITUDE in path or use 'force_apt_get=True'")
apt_cmd_path = m.get_bin_path(apt_cmd, required=True)
cmd = '%s -y %s %s %s %s %s %s %s' % (
apt_cmd_path,
dpkg_options,
force_yes,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade,
check_arg,
upgrade_command,
)
if default_release:
cmd += " -t '%s'" % (default_release,)
with PolicyRcD(m):
rc, out, err = m.run_command(cmd, prompt_regex=prompt_regex)
if m._diff:
diff = parse_diff(out)
else:
diff = {}
if rc:
m.fail_json(msg="'%s %s' failed: %s" % (apt_cmd, upgrade_command, err), stdout=out, rc=rc)
if (apt_cmd == APT_GET_CMD and APT_GET_ZERO in out) or (apt_cmd == APTITUDE_CMD and APTITUDE_ZERO in out):
m.exit_json(changed=False, msg=out, stdout=out, stderr=err)
m.exit_json(changed=True, msg=out, stdout=out, stderr=err, diff=diff)
def get_cache_mtime():
"""Return mtime of a valid apt cache file.
Stat the apt cache file and if no cache file is found return 0
:returns: ``int``
"""
cache_time = 0
if os.path.exists(APT_UPDATE_SUCCESS_STAMP_PATH):
cache_time = os.stat(APT_UPDATE_SUCCESS_STAMP_PATH).st_mtime
elif os.path.exists(APT_LISTS_PATH):
cache_time = os.stat(APT_LISTS_PATH).st_mtime
return cache_time
def get_updated_cache_time():
"""Return the mtime time stamp and the updated cache time.
Always retrieve the mtime of the apt cache or set the `cache_mtime`
variable to 0
:returns: ``tuple``
"""
cache_mtime = get_cache_mtime()
mtimestamp = datetime.datetime.fromtimestamp(cache_mtime)
updated_cache_time = int(time.mktime(mtimestamp.timetuple()))
return mtimestamp, updated_cache_time
# https://github.com/ansible/ansible-modules-core/issues/2951
def get_cache(module):
'''Attempt to get the cache object and update till it works'''
cache = None
try:
cache = apt.Cache()
except SystemError as e:
if '/var/lib/apt/lists/' in to_native(e).lower():
# update cache until files are fixed or retries exceeded
retries = 0
while retries < 2:
(rc, so, se) = module.run_command(['apt-get', 'update', '-q'])
retries += 1
if rc == 0:
break
if rc != 0:
module.fail_json(msg='Updating the cache to correct corrupt package lists failed:\n%s\n%s' % (to_native(e), so + se), rc=rc)
# try again
cache = apt.Cache()
else:
module.fail_json(msg=to_native(e))
return cache
def main():
module = AnsibleModule(
argument_spec=dict(
state=dict(type='str', default='present', choices=['absent', 'build-dep', 'fixed', 'latest', 'present']),
update_cache=dict(type='bool', aliases=['update-cache']),
update_cache_retries=dict(type='int', default=5),
update_cache_retry_max_delay=dict(type='int', default=12),
cache_valid_time=dict(type='int', default=0),
purge=dict(type='bool', default=False),
package=dict(type='list', elements='str', aliases=['pkg', 'name']),
deb=dict(type='path'),
default_release=dict(type='str', aliases=['default-release']),
install_recommends=dict(type='bool', aliases=['install-recommends']),
force=dict(type='bool', default=False),
upgrade=dict(type='str', choices=['dist', 'full', 'no', 'safe', 'yes'], default='no'),
dpkg_options=dict(type='str', default=DPKG_OPTIONS),
autoremove=dict(type='bool', default=False),
autoclean=dict(type='bool', default=False),
fail_on_autoremove=dict(type='bool', default=False),
policy_rc_d=dict(type='int', default=None),
only_upgrade=dict(type='bool', default=False),
force_apt_get=dict(type='bool', default=False),
clean=dict(type='bool', default=False),
allow_unauthenticated=dict(type='bool', default=False, aliases=['allow-unauthenticated']),
allow_downgrade=dict(type='bool', default=False, aliases=['allow-downgrade', 'allow_downgrades', 'allow-downgrades']),
allow_change_held_packages=dict(type='bool', default=False),
lock_timeout=dict(type='int', default=60),
),
mutually_exclusive=[['deb', 'package', 'upgrade']],
required_one_of=[['autoremove', 'deb', 'package', 'update_cache', 'upgrade']],
supports_check_mode=True,
)
# We screenscrape apt-get and aptitude output for information so we need
# to make sure we use the best parsable locale when running commands
# also set apt specific vars for desired behaviour
locale = get_best_parsable_locale(module)
# APT related constants
APT_ENV_VARS = dict(
DEBIAN_FRONTEND='noninteractive',
DEBIAN_PRIORITY='critical',
LANG=locale,
LC_ALL=locale,
LC_MESSAGES=locale,
LC_CTYPE=locale,
)
module.run_command_environ_update = APT_ENV_VARS
if not HAS_PYTHON_APT:
# This interpreter can't see the apt Python library- we'll do the following to try and fix that:
# 1) look in common locations for system-owned interpreters that can see it; if we find one, respawn under it
# 2) finding none, try to install a matching python-apt package for the current interpreter version;
# we limit to the current interpreter version to try and avoid installing a whole other Python just
# for apt support
# 3) if we installed a support package, try to respawn under what we think is the right interpreter (could be
# the current interpreter again, but we'll let it respawn anyway for simplicity)
# 4) if still not working, return an error and give up (some corner cases not covered, but this shouldn't be
# made any more complex than it already is to try and cover more, eg, custom interpreters taking over
# system locations)
apt_pkg_name = 'python3-apt' if PY3 else 'python-apt'
if has_respawned():
# this shouldn't be possible; short-circuit early if it happens...
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
interpreters = ['/usr/bin/python3', '/usr/bin/python2', '/usr/bin/python']
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
# don't make changes if we're in check_mode
if module.check_mode:
module.fail_json(msg="%s must be installed to use check mode. "
"If run normally this module can auto-install it." % apt_pkg_name)
        # We skip the cache update when auto-installing the dependency
        # if the user explicitly declared it with update_cache=no.
if module.params.get('update_cache') is False:
module.warn("Auto-installing missing dependency without updating cache: %s" % apt_pkg_name)
else:
module.warn("Updating cache and auto-installing missing dependency: %s" % apt_pkg_name)
module.run_command(['apt-get', 'update'], check_rc=True)
# try to install the apt Python binding
module.run_command(['apt-get', 'install', '--no-install-recommends', apt_pkg_name, '-y', '-q'], check_rc=True)
# try again to find the bindings in common places
interpreter = probe_interpreters_for_module(interpreters, 'apt')
if interpreter:
# found the Python bindings; respawn this module under the interpreter where we found them
# NB: respawn is somewhat wasteful if it's this interpreter, but simplifies the code
respawn_module(interpreter)
# this is the end of the line for this process, it will exit here once the respawned module has completed
else:
# we've done all we can do; just tell the user it's busted and get out
module.fail_json(msg="{0} must be installed and visible from {1}.".format(apt_pkg_name, sys.executable))
global APTITUDE_CMD
APTITUDE_CMD = module.get_bin_path("aptitude", False)
global APT_GET_CMD
APT_GET_CMD = module.get_bin_path("apt-get")
p = module.params
if p['clean'] is True:
aptclean_stdout, aptclean_stderr = aptclean(module)
# If there is nothing else to do exit. This will set state as
# changed based on if the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=True,
msg=aptclean_stdout,
stdout=aptclean_stdout,
stderr=aptclean_stderr
)
if p['upgrade'] == 'no':
p['upgrade'] = None
use_apt_get = p['force_apt_get']
if not use_apt_get and not APTITUDE_CMD:
use_apt_get = True
updated_cache = False
updated_cache_time = 0
install_recommends = p['install_recommends']
allow_unauthenticated = p['allow_unauthenticated']
allow_downgrade = p['allow_downgrade']
allow_change_held_packages = p['allow_change_held_packages']
dpkg_options = expand_dpkg_options(p['dpkg_options'])
autoremove = p['autoremove']
fail_on_autoremove = p['fail_on_autoremove']
autoclean = p['autoclean']
# max times we'll retry
deadline = time.time() + p['lock_timeout']
# keep running on lock issues unless timeout or resolution is hit.
while True:
# Get the cache object, this has 3 retries built in
cache = get_cache(module)
try:
if p['default_release']:
try:
apt_pkg.config['APT::Default-Release'] = p['default_release']
except AttributeError:
apt_pkg.Config['APT::Default-Release'] = p['default_release']
# reopen cache w/ modified config
cache.open(progress=None)
mtimestamp, updated_cache_time = get_updated_cache_time()
# Cache valid time is default 0, which will update the cache if
# needed and `update_cache` was set to true
updated_cache = False
if p['update_cache'] or p['cache_valid_time']:
now = datetime.datetime.now()
tdelta = datetime.timedelta(seconds=p['cache_valid_time'])
if not mtimestamp + tdelta >= now:
# Retry to update the cache with exponential backoff
err = ''
update_cache_retries = module.params.get('update_cache_retries')
update_cache_retry_max_delay = module.params.get('update_cache_retry_max_delay')
randomize = random.randint(0, 1000) / 1000.0
for retry in range(update_cache_retries):
try:
if not module.check_mode:
cache.update()
break
except apt.cache.FetchFailedException as e:
err = to_native(e)
# Use exponential backoff plus a little bit of randomness
delay = 2 ** retry + randomize
if delay > update_cache_retry_max_delay:
delay = update_cache_retry_max_delay + randomize
time.sleep(delay)
else:
module.fail_json(msg='Failed to update apt cache: %s' % (err if err else 'unknown reason'))
cache.open(progress=None)
mtimestamp, post_cache_update_time = get_updated_cache_time()
if module.check_mode or updated_cache_time != post_cache_update_time:
updated_cache = True
updated_cache_time = post_cache_update_time
# If there is nothing else to do exit. This will set state as
# changed based on if the cache was updated.
if not p['package'] and not p['upgrade'] and not p['deb']:
module.exit_json(
changed=updated_cache,
cache_updated=updated_cache,
cache_update_time=updated_cache_time
)
force_yes = p['force']
if p['upgrade']:
upgrade(
module,
p['upgrade'],
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if p['deb']:
if p['state'] != 'present':
module.fail_json(msg="deb only supports state=present")
if '://' in p['deb']:
p['deb'] = fetch_file(module, p['deb'])
install_deb(module, p['deb'], cache,
install_recommends=install_recommends,
allow_unauthenticated=allow_unauthenticated,
allow_change_held_packages=allow_change_held_packages,
allow_downgrade=allow_downgrade,
force=force_yes, fail_on_autoremove=fail_on_autoremove, dpkg_options=p['dpkg_options'])
unfiltered_packages = p['package'] or ()
packages = [package.strip() for package in unfiltered_packages if package != '*']
all_installed = '*' in unfiltered_packages
latest = p['state'] == 'latest'
if latest and all_installed:
if packages:
module.fail_json(msg='unable to install additional packages when upgrading all installed packages')
upgrade(
module,
'yes',
force_yes,
p['default_release'],
use_apt_get,
dpkg_options,
autoremove,
fail_on_autoremove,
allow_unauthenticated,
allow_downgrade
)
if packages:
for package in packages:
if package.count('=') > 1:
module.fail_json(msg="invalid package spec: %s" % package)
if not packages:
if autoclean:
cleanup(module, p['purge'], force=force_yes, operation='autoclean', dpkg_options=dpkg_options)
if autoremove:
cleanup(module, p['purge'], force=force_yes, operation='autoremove', dpkg_options=dpkg_options)
if p['state'] in ('latest', 'present', 'build-dep', 'fixed'):
state_upgrade = False
state_builddep = False
state_fixed = False
if p['state'] == 'latest':
state_upgrade = True
if p['state'] == 'build-dep':
state_builddep = True
if p['state'] == 'fixed':
state_fixed = True
success, retvals = install(
module,
packages,
cache,
upgrade=state_upgrade,
default_release=p['default_release'],
install_recommends=install_recommends,
force=force_yes,
dpkg_options=dpkg_options,
build_dep=state_builddep,
fixed=state_fixed,
autoremove=autoremove,
fail_on_autoremove=fail_on_autoremove,
only_upgrade=p['only_upgrade'],
allow_unauthenticated=allow_unauthenticated,
allow_downgrade=allow_downgrade,
allow_change_held_packages=allow_change_held_packages,
)
# Store if the cache has been updated
retvals['cache_updated'] = updated_cache
                # Store when the cache was last updated
retvals['cache_update_time'] = updated_cache_time
if success:
module.exit_json(**retvals)
else:
module.fail_json(**retvals)
elif p['state'] == 'absent':
remove(module, packages, cache, p['purge'], force=force_yes, dpkg_options=dpkg_options, autoremove=autoremove)
except apt.cache.LockFailedException as lockFailedException:
if time.time() < deadline:
continue
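            # deadline exceeded: stop retrying and surface the lock error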
module.fail_json(msg="Failed to lock apt for exclusive operation: %s" % lockFailedException)
except apt.cache.FetchFailedException as fetchFailedException:
module.fail_json(msg="Could not fetch updated apt files: %s" % fetchFailedException)
        # We should never reach this point: every code path above either exits or raises
module.fail_json(msg='Unexpected code path taken, we really should have exited before, this is a bug')
if __name__ == "__main__":
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,762 |
core 2.13.4 breaking change in apt only_upgrade
|
### Summary
Using `apt` with `only_upgrade: yes` fails if a package is currently not installed. This seems to have been introduced recently by https://github.com/ansible/ansible/pull/78327. For us this was a breaking change - was it intentional?
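For illustration, a minimal sketch of the guard one would expect (assumed names and python-apt usage, not the actual `ansible.builtin.apt` source): with `only_upgrade` enabled, packages that are not installed would be skipped rather than treated as a hard failure.
```python
# Hypothetical sketch only -- not the real ansible.builtin.apt code.
# With only_upgrade enabled, skip packages that are not installed
# instead of failing with "no available installation candidate".
def filter_upgradable(package_names, cache, only_upgrade):
    selected = []
    for name in package_names:
        pkg = cache[name] if name in cache else None  # python-apt Cache lookup
        if only_upgrade and (pkg is None or not pkg.is_installed):
            continue  # nothing installed, so nothing to upgrade; not an error
        selected.append(name)
    return selected
```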
### Issue Type
Bug Report
### Component Name
apt
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.4]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/vagrant/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/vagrant/.local/lib/python3.8/site-packages/ansible
ansible collection location = /home/vagrant/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.10 (default, Jun 22 2022, 20:18:18) [GCC 9.4.0]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
empty for me
```
### OS / Environment
ubuntu/focal64 20220804.0.0 (vagrant box)
### Steps to Reproduce
```yaml
- name: upgrade if installed
apt:
name: "{{ packages }}"
only_upgrade: yes
vars:
packages:
- foo # not installed
- bar
```
### Expected Results
no failure if a package is currently not installed
### Actual Results
```console
TASK [my_role : upgrade if installed.] ***********************************************************************************************************************
fatal: [127.0.0.1]: FAILED! => {"cache_update_time": 1663062238, "cache_updated": false, "changed": false, "msg": "no available installation candidate for foo"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78762
|
https://github.com/ansible/ansible/pull/78781
|
9bc4fa496ca06d21b347071078b0f12343481e07
|
4b45b4b09d9257006f7b23237293c8c1a04521d8
| 2022-09-13T16:11:10Z |
python
| 2022-09-15T19:42:34Z |
test/integration/targets/apt/tasks/repo.yml
|
- block:
- name: Install foo package version 1.0.0
apt:
name: foo=1.0.0
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.0' in dpkg_result.stdout"
- name: Update to foo version 1.0.1
apt:
name: foo
state: latest
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.1' in dpkg_result.stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- name: Try to install non-existent version
apt:
name: foo=99
state: present
ignore_errors: true
register: apt_result
- name: Check if install failed
assert:
that:
- apt_result is failed
# https://github.com/ansible/ansible/issues/30638
- block:
- name: Fail to install foo=1.0.1 since foo is not installed and only_upgrade is set
apt:
name: foo=1.0.1
state: present
only_upgrade: yes
allow_unauthenticated: yes
ignore_errors: yes
register: apt_result
- name: Check that foo was not upgraded
assert:
that:
- "apt_result is not changed"
- "apt_result is failed"
- apt:
name: foo=1.0.0
allow_unauthenticated: yes
- name: Upgrade foo to 1.0.1
apt:
name: foo=1.0.1
state: present
only_upgrade: yes
allow_unauthenticated: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.1' in dpkg_result.stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- block:
- name: Install foo=1.0.0
apt:
name: foo=1.0.0
- name: Run version test matrix
apt:
name: foo{{ item.0 }}
default_release: '{{ item.1 }}'
state: '{{ item.2 | ternary("latest","present") }}'
check_mode: true
register: apt_result
loop:
# [filter, release, state_latest, expected]
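        # expected=null means check mode reports no change; a version string is
        # the version foo is expected to be upgraded to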
- ["", null, false, null]
- ["", null, true, "1.0.1"]
- ["=1.0.0", null, false, null]
- ["=1.0.0", null, true, null]
- ["=1.0.1", null, false, "1.0.1"]
#- ["=1.0.*", null, false, null] # legacy behavior. should not upgrade without state=latest
- ["=1.0.*", null, true, "1.0.1"]
- [">=1.0.0", null, false, null]
- [">=1.0.0", null, true, "1.0.1"]
- [">=1.0.1", null, false, "1.0.1"]
- ["", "testing", false, null]
- ["", "testing", true, "2.0.1"]
- ["=2.0.0", null, false, "2.0.0"]
- [">=2.0.0", "testing", false, "2.0.1"]
- name: Validate version test matrix
assert:
that:
- (item.item.3 is not none) == (item.stdout is defined)
- item.item.3 is none or "Inst foo [1.0.0] (" + item.item.3 + " localhost [all])" in item.stdout_lines
loop: '{{ apt_result.results }}'
- name: Pin foo=1.0.0
copy:
content: |-
Package: foo
Pin: version 1.0.0
Pin-Priority: 1000
dest: /etc/apt/preferences.d/foo
- name: Run pinning version test matrix
apt:
name: foo{{ item.0 }}
default_release: '{{ item.1 }}'
state: '{{ item.2 | ternary("latest","present") }}'
check_mode: true
ignore_errors: true
register: apt_result
loop:
# [filter, release, state_latest, expected] # expected=null for no change. expected=False to assert an error
- ["", null, false, null]
- ["", null, true, null]
- ["=1.0.0", null, false, null]
- ["=1.0.0", null, true, null]
- ["=1.0.1", null, false, "1.0.1"]
#- ["=1.0.*", null, false, null] # legacy behavior. should not upgrade without state=latest
- ["=1.0.*", null, true, "1.0.1"]
- [">=1.0.0", null, false, null]
- [">=1.0.0", null, true, null]
- [">=1.0.1", null, false, False]
- ["", "testing", false, null]
- ["", "testing", true, null]
- ["=2.0.0", null, false, "2.0.0"]
- [">=2.0.0", "testing", false, False]
- name: Validate pinning version test matrix
assert:
that:
- (item.item.3 != False) or (item.item.3 == False and item is failed)
- (item.item.3 is string) == (item.stdout is defined)
- item.item.3 is not string or "Inst foo [1.0.0] (" + item.item.3 + " localhost [all])" in item.stdout_lines
loop: '{{ apt_result.results }}'
always:
- name: Uninstall foo
apt:
name: foo
state: absent
- name: Unpin foo
file:
path: /etc/apt/preferences.d/foo
state: absent
# https://github.com/ansible/ansible/issues/35900
- block:
- name: Disable ubuntu repos so system packages are not upgraded and do not change testing env
command: mv /etc/apt/sources.list /etc/apt/sources.list.backup
- name: Install foobar, installs foo as a dependency
apt:
name: foobar=1.0.0
allow_unauthenticated: yes
- name: mark foobar as auto for next test
shell: apt-mark auto foobar
- name: Install foobar (marked as manual) (check mode)
apt:
name: foobar=1.0.1
allow_unauthenticated: yes
check_mode: yes
register: manual_foobar_install_check_mode
- name: check foobar was not marked as manually installed by check mode
shell: apt-mark showmanual | grep foobar
ignore_errors: yes
register: showmanual
- assert:
that:
- manual_foobar_install_check_mode.changed
- "'foobar' not in showmanual.stdout"
- name: Install foobar (marked as manual)
apt:
name: foobar=1.0.1
allow_unauthenticated: yes
register: manual_foobar_install
- name: check foobar was marked as manually installed
shell: apt-mark showmanual | grep foobar
ignore_errors: yes
register: showmanual
- assert:
that:
- manual_foobar_install.changed
- "'foobar' in showmanual.stdout"
- name: Upgrade foobar to a version which does not depend on foo, autoremove should remove foo
apt:
upgrade: dist
autoremove: yes
allow_unauthenticated: yes
- name: Check foo with dpkg
shell: dpkg-query -l foo
register: dpkg_result
ignore_errors: yes
- name: Check that foo was removed by autoremove
assert:
that:
- "dpkg_result is failed"
always:
- name: Clean up
apt:
pkg: foo,foobar
state: absent
autoclean: yes
- name: Restore ubuntu repos
command: mv /etc/apt/sources.list.backup /etc/apt/sources.list
# https://github.com/ansible/ansible/issues/26298
- block:
- name: Disable ubuntu repos so system packages are not upgraded and do not change testing env
command: mv /etc/apt/sources.list /etc/apt/sources.list.backup
- name: Install foobar, installs foo as a dependency
apt:
name: foobar=1.0.0
allow_unauthenticated: yes
- name: Upgrade foobar to a version which does not depend on foo
apt:
upgrade: dist
force: yes # workaround for --allow-unauthenticated used along with upgrade
- name: autoremove should remove foo
apt:
autoremove: yes
register: autoremove_result
- name: Check that autoremove correctly reports changed=True
assert:
that:
- "autoremove_result is changed"
- name: Check foo with dpkg
shell: dpkg-query -l foo
register: dpkg_result
ignore_errors: yes
- name: Check that foo was removed by autoremove
assert:
that:
- "dpkg_result is failed"
- name: Nothing to autoremove
apt:
autoremove: yes
register: autoremove_result
- name: Check that autoremove correctly reports changed=False
assert:
that:
- "autoremove_result is not changed"
- name: Create a fake .deb file for autoclean to remove
file:
name: /var/cache/apt/archives/python3-q_2.4-1_all.deb
state: touch
- name: autoclean fake .deb file
apt:
autoclean: yes
register: autoclean_result
- name: Check if the .deb file exists
stat:
path: /var/cache/apt/archives/python3-q_2.4-1_all.deb
register: stat_result
- name: Check that autoclean correctly reports changed=True and file was removed
assert:
that:
- "autoclean_result is changed"
- "not stat_result.stat.exists"
- name: Nothing to autoclean
apt:
autoclean: yes
register: autoclean_result
- name: Check that autoclean correctly reports changed=False
assert:
that:
- "autoclean_result is not changed"
always:
- name: Clean up
apt:
pkg: foo,foobar
state: absent
autoclean: yes
- name: Restore ubuntu repos
command: mv /etc/apt/sources.list.backup /etc/apt/sources.list
- name: Downgrades
import_tasks: "downgrade.yml"
- name: Upgrades
block:
- import_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ True | bool }}"
upgrade_type: "dist"
force_apt_get: "{{ False | bool }}"
- name: Check if aptitude is installed
command: dpkg-query --show --showformat='${db:Status-Abbrev}' aptitude
register: aptitude_status
- name: Remove aptitude, if installed, to test fall-back to apt-get
apt:
pkg: aptitude
state: absent
when:
- aptitude_status.stdout.find('ii') != -1
- include_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ False | bool }}"
upgrade_type: "{{ item.upgrade_type }}"
force_apt_get: "{{ item.force_apt_get }}"
with_items:
- { upgrade_type: safe, force_apt_get: False }
- { upgrade_type: full, force_apt_get: False }
- { upgrade_type: safe, force_apt_get: True }
- { upgrade_type: full, force_apt_get: True }
- name: (Re-)Install aptitude, run same tests again
apt:
pkg: aptitude
state: present
- include_tasks: "upgrade.yml"
vars:
aptitude_present: "{{ True | bool }}"
upgrade_type: "{{ item.upgrade_type }}"
force_apt_get: "{{ item.force_apt_get }}"
with_items:
- { upgrade_type: safe, force_apt_get: False }
- { upgrade_type: full, force_apt_get: False }
- { upgrade_type: safe, force_apt_get: True }
- { upgrade_type: full, force_apt_get: True }
- name: Remove aptitude if not originally present
apt:
pkg: aptitude
state: absent
when:
- aptitude_status.stdout.find('ii') == -1
- block:
- name: Install the foo package with diff=yes
apt:
name: foo
allow_unauthenticated: yes
diff: yes
register: apt_result
- name: Check the content of diff.prepared
assert:
that:
- apt_result is success
- "'The following NEW packages will be installed:\n foo' in apt_result.diff.prepared"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
- block:
- name: Install foo package version 1.0.0 with force=yes, implies allow_unauthenticated=yes
apt:
name: foo=1.0.0
force: yes
register: apt_result
- name: Check install with dpkg
shell: dpkg-query -l foo
register: dpkg_result
- name: Check if install was successful
assert:
that:
- "apt_result is success"
- "dpkg_result is success"
- "'1.0.0' in dpkg_result.stdout"
always:
- name: Clean up
apt:
name: foo
state: absent
allow_unauthenticated: yes
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,786 |
Cannot show uri test docs
|
### Summary
Running `ansible-doc -t test uri` produces
```
ERROR! test ansible.builtin.uri missing documentation (or could not parse documentation): No documentation availalbe for ansible.builtin.uri (/path/to/ansible/lib/ansible/plugins/test/uri.py)
```
It tries to look at `uri.py` instead of `uri.yml` in the same directory.
Listing the tests shows the plugin correctly; `--metadata-dump` does not.
This happens both with #78700 and without #78700. #77737 also demonstrates this.
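The expected behaviour is roughly the sidecar lookup sketched below (a simplified sketch with an assumed extension list; the real logic is `_find_adjacent` in `lib/ansible/utils/plugin_docs.py`): when the resolved plugin file cannot carry a `DOCUMENTATION` string itself, a same-named file with a documentation extension should be consulted instead.
```python
# Simplified sketch of sidecar-doc resolution; DOC_EXTENSIONS is an
# assumption here (the real list comes from ansible.constants).
from pathlib import Path

DOC_EXTENSIONS = ('.yml', '.yaml')

def sidecar_for(plugin_path, plugin_name):
    # strip any namespace prefix: 'ansible.builtin.uri' -> 'uri'
    base = Path(plugin_path).with_name(plugin_name.split('.')[-1])
    for ext in DOC_EXTENSIONS:
        candidate = base.with_suffix(ext)
        if candidate.exists():
            return candidate  # e.g. .../plugins/test/uri.yml next to uri.py
    return None  # no sidecar found; the .py file itself must hold the docs
```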
### Issue Type
Bug Report
### Component Name
ansible-doc
### Ansible Version
```console
devel branch
#78700
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
-
### Expected Results
-
### Actual Results
```console
-
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78786
|
https://github.com/ansible/ansible/pull/78788
|
1d410ca700a468723be2bf76b142dc7be66401fc
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
| 2022-09-15T18:22:01Z |
python
| 2022-09-19T15:50:27Z |
lib/ansible/utils/plugin_docs.py
|
# Copyright: (c) 2012, Jan-Piet Mens <jpmens () gmail.com>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from collections.abc import MutableMapping, MutableSet, MutableSequence
from pathlib import Path
from ansible import constants as C
from ansible.release import __version__ as ansible_version
from ansible.errors import AnsibleError, AnsibleParserError, AnsiblePluginNotFound
from ansible.module_utils.six import string_types
from ansible.module_utils._text import to_native
from ansible.parsing.plugin_docs import read_docstring
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.utils.display import Display
display = Display()
def merge_fragment(target, source):
for key, value in source.items():
if key in target:
# assumes both structures have same type
if isinstance(target[key], MutableMapping):
value.update(target[key])
elif isinstance(target[key], MutableSet):
value.add(target[key])
elif isinstance(target[key], MutableSequence):
value = sorted(frozenset(value + target[key]))
else:
raise Exception("Attempt to extend a documentation fragment, invalid type for %s" % key)
target[key] = value
def _process_versions_and_dates(fragment, is_module, return_docs, callback):
def process_deprecation(deprecation, top_level=False):
collection_name = 'removed_from_collection' if top_level else 'collection_name'
if not isinstance(deprecation, MutableMapping):
return
if (is_module or top_level) and 'removed_in' in deprecation: # used in module deprecations
callback(deprecation, 'removed_in', collection_name)
if 'removed_at_date' in deprecation:
callback(deprecation, 'removed_at_date', collection_name)
if not (is_module or top_level) and 'version' in deprecation: # used in plugin option deprecations
callback(deprecation, 'version', collection_name)
def process_option_specifiers(specifiers):
for specifier in specifiers:
if not isinstance(specifier, MutableMapping):
continue
if 'version_added' in specifier:
callback(specifier, 'version_added', 'version_added_collection')
if isinstance(specifier.get('deprecated'), MutableMapping):
process_deprecation(specifier['deprecated'])
def process_options(options):
for option in options.values():
if not isinstance(option, MutableMapping):
continue
if 'version_added' in option:
callback(option, 'version_added', 'version_added_collection')
if not is_module:
if isinstance(option.get('env'), list):
process_option_specifiers(option['env'])
if isinstance(option.get('ini'), list):
process_option_specifiers(option['ini'])
if isinstance(option.get('vars'), list):
process_option_specifiers(option['vars'])
if isinstance(option.get('deprecated'), MutableMapping):
process_deprecation(option['deprecated'])
if isinstance(option.get('suboptions'), MutableMapping):
process_options(option['suboptions'])
def process_return_values(return_values):
for return_value in return_values.values():
if not isinstance(return_value, MutableMapping):
continue
if 'version_added' in return_value:
callback(return_value, 'version_added', 'version_added_collection')
if isinstance(return_value.get('contains'), MutableMapping):
process_return_values(return_value['contains'])
def process_attributes(attributes):
for attribute in attributes.values():
if not isinstance(attribute, MutableMapping):
continue
if 'version_added' in attribute:
callback(attribute, 'version_added', 'version_added_collection')
if not fragment:
return
if return_docs:
process_return_values(fragment)
return
if 'version_added' in fragment:
callback(fragment, 'version_added', 'version_added_collection')
if isinstance(fragment.get('deprecated'), MutableMapping):
process_deprecation(fragment['deprecated'], top_level=True)
if isinstance(fragment.get('options'), MutableMapping):
process_options(fragment['options'])
if isinstance(fragment.get('attributes'), MutableMapping):
process_attributes(fragment['attributes'])
def add_collection_to_versions_and_dates(fragment, collection_name, is_module, return_docs=False):
def add(options, option, collection_name_field):
if collection_name_field not in options:
options[collection_name_field] = collection_name
_process_versions_and_dates(fragment, is_module, return_docs, add)
def remove_current_collection_from_versions_and_dates(fragment, collection_name, is_module, return_docs=False):
def remove(options, option, collection_name_field):
if options.get(collection_name_field) == collection_name:
del options[collection_name_field]
_process_versions_and_dates(fragment, is_module, return_docs, remove)
def add_fragments(doc, filename, fragment_loader, is_module=False):
fragments = doc.pop('extends_documentation_fragment', [])
if isinstance(fragments, string_types):
fragments = [fragments]
unknown_fragments = []
# doc_fragments are allowed to specify a fragment var other than DOCUMENTATION
# with a . separator; this is complicated by collections-hosted doc_fragments that
# use the same separator. Assume it's collection-hosted normally first, try to load
# as-specified. If failure, assume the right-most component is a var, split it off,
# and retry the load.
for fragment_slug in fragments:
fragment_name = fragment_slug
fragment_var = 'DOCUMENTATION'
fragment_class = fragment_loader.get(fragment_name)
if fragment_class is None and '.' in fragment_slug:
splitname = fragment_slug.rsplit('.', 1)
fragment_name = splitname[0]
fragment_var = splitname[1].upper()
fragment_class = fragment_loader.get(fragment_name)
if fragment_class is None:
unknown_fragments.append(fragment_slug)
continue
fragment_yaml = getattr(fragment_class, fragment_var, None)
if fragment_yaml is None:
if fragment_var != 'DOCUMENTATION':
# if it's asking for something specific that's missing, that's an error
unknown_fragments.append(fragment_slug)
continue
else:
fragment_yaml = '{}' # TODO: this is still an error later since we require 'options' below...
fragment = AnsibleLoader(fragment_yaml, file_name=filename).get_single_data()
real_fragment_name = getattr(fragment_class, 'ansible_name')
real_collection_name = '.'.join(real_fragment_name.split('.')[0:2]) if '.' in real_fragment_name else ''
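        # e.g. 'community.general.docker' -> 'community.general'; a name without dots is not collection-hosted, so use ''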
add_collection_to_versions_and_dates(fragment, real_collection_name, is_module=is_module)
if 'notes' in fragment:
notes = fragment.pop('notes')
if notes:
if 'notes' not in doc:
doc['notes'] = []
doc['notes'].extend(notes)
if 'seealso' in fragment:
seealso = fragment.pop('seealso')
if seealso:
if 'seealso' not in doc:
doc['seealso'] = []
doc['seealso'].extend(seealso)
if 'options' not in fragment and 'attributes' not in fragment:
raise Exception("missing options or attributes in fragment (%s), possibly misformatted?: %s" % (fragment_name, filename))
# ensure options themselves are directly merged
for doc_key in ['options', 'attributes']:
if doc_key in fragment:
if doc_key in doc:
try:
merge_fragment(doc[doc_key], fragment.pop(doc_key))
except Exception as e:
raise AnsibleError("%s %s (%s) of unknown type: %s" % (to_native(e), doc_key, fragment_name, filename))
else:
doc[doc_key] = fragment.pop(doc_key)
# merge rest of the sections
try:
merge_fragment(doc, fragment)
except Exception as e:
raise AnsibleError("%s (%s) of unknown type: %s" % (to_native(e), fragment_name, filename))
if unknown_fragments:
raise AnsibleError('unknown doc_fragment(s) in file {0}: {1}'.format(filename, to_native(', '.join(unknown_fragments))))
def get_docstring(filename, fragment_loader, verbose=False, ignore_errors=False, collection_name=None, is_module=None, plugin_type=None):
"""
DOCUMENTATION can be extended using documentation fragments loaded by the PluginLoader from the doc_fragments plugins.
"""
if is_module is None:
if plugin_type is None:
is_module = False
else:
is_module = (plugin_type == 'module')
else:
# TODO deprecate is_module argument, now that we have 'type'
pass
data = read_docstring(filename, verbose=verbose, ignore_errors=ignore_errors)
if data.get('doc', False):
# add collection name to versions and dates
if collection_name is not None:
add_collection_to_versions_and_dates(data['doc'], collection_name, is_module=is_module)
# add fragments to documentation
add_fragments(data['doc'], filename, fragment_loader=fragment_loader, is_module=is_module)
if data.get('returndocs', False):
# add collection name to versions and dates
if collection_name is not None:
add_collection_to_versions_and_dates(data['returndocs'], collection_name, is_module=is_module, return_docs=True)
return data['doc'], data['plainexamples'], data['returndocs'], data['metadata']
def get_versioned_doclink(path):
"""
returns a versioned documentation link for the current Ansible major.minor version; used to generate
in-product warning/error links to the configured DOCSITE_ROOT_URL
(eg, https://docs.ansible.com/ansible/2.8/somepath/doc.html)
:param path: relative path to a document under docs/docsite/rst;
:return: absolute URL to the specified doc for the current version of Ansible
"""
path = to_native(path)
try:
base_url = C.config.get_config_value('DOCSITE_ROOT_URL')
if not base_url.endswith('/'):
base_url += '/'
if path.startswith('/'):
path = path[1:]
split_ver = ansible_version.split('.')
if len(split_ver) < 3:
raise RuntimeError('invalid version ({0})'.format(ansible_version))
doc_version = '{0}.{1}'.format(split_ver[0], split_ver[1])
# check to see if it's a X.Y.0 non-rc prerelease or dev release, if so, assume devel (since the X.Y doctree
# isn't published until beta-ish)
if split_ver[2].startswith('0'):
# exclude rc; we should have the X.Y doctree live by rc1
if any((pre in split_ver[2]) for pre in ['a', 'b']) or len(split_ver) > 3 and 'dev' in split_ver[3]:
doc_version = 'devel'
return '{0}{1}/{2}'.format(base_url, doc_version, path)
except Exception as ex:
return '(unable to create versioned doc link for path {0}: {1})'.format(path, to_native(ex))
def _find_adjacent(path, plugin, extensions):
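    # Given a plugin's resolved source path, look for a same-named "sidecar"
    # documentation file with one of the allowed extensions (e.g. uri.yml
    # alongside uri.py) and return the first one that exists.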
found = None
adjacent = Path(path)
plugin_base_name = plugin.split('.')[-1]
if adjacent.stem != plugin_base_name:
# this should only affect filters/tests
adjacent = adjacent.with_name(plugin_base_name)
for ext in extensions:
candidate = adjacent.with_suffix(ext)
if candidate.exists():
found = to_native(candidate)
break
return found
def find_plugin_docfile(plugin, plugin_type, loader):
''' if the plugin lives in a non-python file (eg, win_X.ps1), require the corresponding 'sidecar' file for docs '''
context = loader.find_plugin_with_context(plugin, ignore_deprecated=False, check_aliases=True)
if (not context or not context.resolved) and plugin_type in ('filter', 'test'):
# should only happen for filters/test
plugin_obj, context = loader.get_with_context(plugin)
if not context or not context.resolved:
raise AnsiblePluginNotFound('%s was not found' % (plugin), plugin_load_context=context)
docfile = Path(context.plugin_resolved_path)
if docfile.suffix not in C.DOC_EXTENSIONS:
# only look for adjacent if plugin file does not support documents
filename = _find_adjacent(docfile, plugin, C.DOC_EXTENSIONS)
else:
filename = to_native(docfile)
if filename is None:
raise AnsibleError('%s cannot contain DOCUMENTATION nor does it have a companion documentation file' % (plugin))
return filename, context.plugin_resolved_collection
def get_plugin_docs(plugin, plugin_type, loader, fragment_loader, verbose):
docs = []
# find plugin doc file, if it doesn't exist this will throw error, we let it through
# can raise exception and short circuit when 'not found'
filename, collection_name = find_plugin_docfile(plugin, plugin_type, loader)
try:
docs = get_docstring(filename, fragment_loader, verbose=verbose, collection_name=collection_name, plugin_type=plugin_type)
except Exception as e:
raise AnsibleParserError('%s did not contain a DOCUMENTATION attribute (%s)' % (plugin, filename), orig_exc=e)
# no good? try adjacent
if not docs[0]:
try:
newfile = _find_adjacent(filename, plugin, C.DOC_EXTENSIONS)
if newfile:
docs = get_docstring(newfile, fragment_loader, verbose=verbose, collection_name=collection_name, plugin_type=plugin_type)
filename = newfile
except Exception as e:
raise AnsibleParserError('Adjacent file %s did not contain a DOCUMENTATION attribute (%s)' % (plugin, filename), orig_exc=e)
# add extra data to docs[0] (aka 'DOCUMENTATION')
if docs[0] is None:
raise AnsibleParserError('No documentation available for %s (%s)' % (plugin, filename))
else:
docs[0]['filename'] = filename
docs[0]['collection'] = collection_name
return docs
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
In the dependency resolution stage, ansible-galaxy still queries https://galaxy.ansible.com/api/v2/collections/xxx/yyy even though it has already found a collection that satisfies the dependency in collections_paths. I have not tried with other collections, but I'm confident this pattern is consistent.
I've verified it happens in recent versions such as Ansible Core 2.12.4 and 2.12.0, but also in older versions such as Ansible 2.9.25.
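Conceptually, the short-circuit I would expect looks something like the sketch below (assumed helper names, not the actual ansible-galaxy resolver): consult the collections that are already installed first, and only fall back to the network when the requirement is not satisfied locally.
```python
# Minimal sketch with assumed names -- not the real ansible-galaxy code.
def candidate_versions(fqcn, requirement, installed, remote_api):
    local_version = installed.get(fqcn)  # e.g. {'amazon.aws': '3.1.1'}
    if local_version is not None and requirement.is_satisfied_by(local_version):
        return [local_version]  # already satisfied locally; skip the network
    return remote_api.available_versions(fqcn)  # network fallback
```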
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are locally
```console
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an isolated environment), try to install a module that has dependencies already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
changelogs/fragments/78678-add-a-g-install-offline.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
In the dependency resolution stage, ansible-galaxy still queries https://galaxy.ansible.com/api/v2/collections/xxx/yyy even though it has already found a collection that satisfies the dependency in collections_paths. I have not tried with other collections, but I'm confident this pattern is consistent.
I've verified it happens in recent versions such as Ansible Core 2.12.4 and 2.12.0, but also in older versions such as Ansible 2.9.25.
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are locally
```console
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an isolated environment), try to install a module that has dependencies already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
lib/ansible/cli/galaxy.py
|
#!/usr/bin/env python
# Copyright: (c) 2013, James Cammarata <[email protected]>
# Copyright: (c) 2018-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import json
import os.path
import re
import shutil
import sys
import textwrap
import time
from yaml.error import YAMLError
import ansible.constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleError, AnsibleOptionsError
from ansible.galaxy import Galaxy, get_collections_galaxy_meta_info
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection import (
build_collection,
download_collections,
find_existing_collections,
install_collections,
publish_collection,
validate_collection_name,
validate_collection_path,
verify_collections,
SIGNATURE_COUNT_RE,
)
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.collection.gpg import GPG_ERROR_MAP
from ansible.galaxy.dependency_resolution.dataclasses import Requirement
from ansible.galaxy.role import GalaxyRole
from ansible.galaxy.token import BasicAuthToken, GalaxyToken, KeycloakToken, NoTokenSentinel
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.module_utils.common.collections import is_iterable
from ansible.module_utils.common.yaml import yaml_dump, yaml_load
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils import six
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.playbook.role.requirement import RoleRequirement
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig
from ansible.utils.display import Display
from ansible.utils.plugin_docs import get_versioned_doclink
display = Display()
urlparse = six.moves.urllib.parse.urlparse
# config definition by position: name, required, type
SERVER_DEF = [
('url', True, 'str'),
('username', False, 'str'),
('password', False, 'str'),
('token', False, 'str'),
('auth_url', False, 'str'),
('v3', False, 'bool'),
('validate_certs', False, 'bool'),
('client_id', False, 'str'),
('timeout', False, 'int'),
]
# config definition fields
SERVER_ADDITIONAL = {
'v3': {'default': 'False'},
'validate_certs': {'default': True, 'cli': [{'name': 'validate_certs'}]},
'timeout': {'default': '60', 'cli': [{'name': 'timeout'}]},
'token': {'default': None},
}
# override default if the generic is set
if C.GALAXY_IGNORE_CERTS is not None:
SERVER_ADDITIONAL['validate_certs'].update({'default': not C.GALAXY_IGNORE_CERTS})
def with_collection_artifacts_manager(wrapped_method):
"""Inject an artifacts manager if not passed explicitly.
This decorator constructs a ConcreteArtifactsManager and maintains
the related temporary directory auto-cleanup around the target
method invocation.
"""
def method_wrapper(*args, **kwargs):
if 'artifacts_manager' in kwargs:
return wrapped_method(*args, **kwargs)
artifacts_manager_kwargs = {'validate_certs': context.CLIARGS['validate_certs']}
keyring = context.CLIARGS.get('keyring', None)
if keyring is not None:
artifacts_manager_kwargs.update({
'keyring': GalaxyCLI._resolve_path(keyring),
'required_signature_count': context.CLIARGS.get('required_valid_signature_count', None),
'ignore_signature_errors': context.CLIARGS.get('ignore_gpg_errors', None),
})
with ConcreteArtifactsManager.under_tmpdir(
C.DEFAULT_LOCAL_TMP,
**artifacts_manager_kwargs
) as concrete_artifact_cm:
kwargs['artifacts_manager'] = concrete_artifact_cm
return wrapped_method(*args, **kwargs)
return method_wrapper
def _display_header(path, h1, h2, w1=10, w2=7):
display.display('\n# {0}\n{1:{cwidth}} {2:{vwidth}}\n{3} {4}\n'.format(
path,
h1,
h2,
'-' * max([len(h1), w1]), # Make sure that the number of dashes is at least the width of the header
'-' * max([len(h2), w2]),
cwidth=w1,
vwidth=w2,
))
def _display_role(gr):
install_info = gr.install_info
version = None
if install_info:
version = install_info.get("version", None)
if not version:
version = "(unknown version)"
display.display("- %s, %s" % (gr.name, version))
def _display_collection(collection, cwidth=10, vwidth=7, min_cwidth=10, min_vwidth=7):
display.display('{fqcn:{cwidth}} {version:{vwidth}}'.format(
fqcn=to_text(collection.fqcn),
version=collection.ver,
cwidth=max(cwidth, min_cwidth), # Make sure the width isn't smaller than the header
vwidth=max(vwidth, min_vwidth)
))
def _get_collection_widths(collections):
if not is_iterable(collections):
collections = (collections, )
fqcn_set = {to_text(c.fqcn) for c in collections}
version_set = {to_text(c.ver) for c in collections}
fqcn_length = len(max(fqcn_set, key=len))
version_length = len(max(version_set, key=len))
return fqcn_length, version_length
def validate_signature_count(value):
match = re.match(SIGNATURE_COUNT_RE, value)
if match is None:
raise ValueError(f"{value} is not a valid signature count value")
return value
class GalaxyCLI(CLI):
'''command to manage Ansible roles in shared repositories, the default of which is Ansible Galaxy *https://galaxy.ansible.com*.'''
name = 'ansible-galaxy'
SKIP_INFO_KEYS = ("name", "description", "readme_html", "related", "summary_fields", "average_aw_composite", "average_aw_score", "url")
def __init__(self, args):
self._raw_args = args
self._implicit_role = False
if len(args) > 1:
# Inject role into sys.argv[1] as a backwards compatibility step
if args[1] not in ['-h', '--help', '--version'] and 'role' not in args and 'collection' not in args:
# TODO: Should we add a warning here and eventually deprecate the implicit role subcommand choice
args.insert(1, 'role')
self._implicit_role = True
# since argparse doesn't allow hidden subparsers, handle dead login arg from raw args after "role" normalization
if args[1:3] == ['role', 'login']:
display.error(
"The login command was removed in late 2020. An API key is now required to publish roles or collections "
"to Galaxy. The key can be found at https://galaxy.ansible.com/me/preferences, and passed to the "
"ansible-galaxy CLI via a file at {0} or (insecurely) via the `--token` "
"command-line argument.".format(to_text(C.GALAXY_TOKEN_PATH)))
sys.exit(1)
self.api_servers = []
self.galaxy = None
self._api = None
super(GalaxyCLI, self).__init__(args)
def init_parser(self):
''' create an options parser for bin/ansible '''
super(GalaxyCLI, self).init_parser(
desc="Perform various Role and Collection related operations.",
)
# Common arguments that apply to more than 1 action
common = opt_help.argparse.ArgumentParser(add_help=False)
common.add_argument('-s', '--server', dest='api_server', help='The Galaxy API server URL')
common.add_argument('--token', '--api-key', dest='api_key',
help='The Ansible Galaxy API key which can be found at '
'https://galaxy.ansible.com/me/preferences.')
common.add_argument('-c', '--ignore-certs', action='store_true', dest='ignore_certs', help='Ignore SSL certificate validation errors.', default=None)
common.add_argument('--timeout', dest='timeout', type=int,
help="The time to wait for operations against the galaxy server, defaults to 60s.")
opt_help.add_verbosity_options(common)
force = opt_help.argparse.ArgumentParser(add_help=False)
force.add_argument('-f', '--force', dest='force', action='store_true', default=False,
help='Force overwriting an existing role or collection')
github = opt_help.argparse.ArgumentParser(add_help=False)
github.add_argument('github_user', help='GitHub username')
github.add_argument('github_repo', help='GitHub repository')
offline = opt_help.argparse.ArgumentParser(add_help=False)
offline.add_argument('--offline', dest='offline', default=False, action='store_true',
help="Don't query the galaxy API when creating roles")
default_roles_path = C.config.get_configuration_definition('DEFAULT_ROLES_PATH').get('default', '')
roles_path = opt_help.argparse.ArgumentParser(add_help=False)
roles_path.add_argument('-p', '--roles-path', dest='roles_path', type=opt_help.unfrack_path(pathsep=True),
default=C.DEFAULT_ROLES_PATH, action=opt_help.PrependListAction,
help='The path to the directory containing your roles. The default is the first '
'writable one configured via DEFAULT_ROLES_PATH: %s ' % default_roles_path)
collections_path = opt_help.argparse.ArgumentParser(add_help=False)
collections_path.add_argument('-p', '--collections-path', dest='collections_path', type=opt_help.unfrack_path(pathsep=True),
default=AnsibleCollectionConfig.collection_paths,
action=opt_help.PrependListAction,
help="One or more directories to search for collections in addition "
"to the default COLLECTIONS_PATHS. Separate multiple paths "
"with '{0}'.".format(os.path.pathsep))
cache_options = opt_help.argparse.ArgumentParser(add_help=False)
cache_options.add_argument('--clear-response-cache', dest='clear_response_cache', action='store_true',
default=False, help='Clear the existing server response cache.')
cache_options.add_argument('--no-cache', dest='no_cache', action='store_true', default=False,
help='Do not use the server response cache.')
# Add sub parser for the Galaxy role type (role or collection)
type_parser = self.parser.add_subparsers(metavar='TYPE', dest='type')
type_parser.required = True
# Add sub parser for the Galaxy collection actions
collection = type_parser.add_parser('collection', help='Manage an Ansible Galaxy collection.')
collection_parser = collection.add_subparsers(metavar='COLLECTION_ACTION', dest='action')
collection_parser.required = True
self.add_download_options(collection_parser, parents=[common, cache_options])
self.add_init_options(collection_parser, parents=[common, force])
self.add_build_options(collection_parser, parents=[common, force])
self.add_publish_options(collection_parser, parents=[common])
self.add_install_options(collection_parser, parents=[common, force, cache_options])
self.add_list_options(collection_parser, parents=[common, collections_path])
self.add_verify_options(collection_parser, parents=[common, collections_path])
# Add sub parser for the Galaxy role actions
role = type_parser.add_parser('role', help='Manage an Ansible Galaxy role.')
role_parser = role.add_subparsers(metavar='ROLE_ACTION', dest='action')
role_parser.required = True
self.add_init_options(role_parser, parents=[common, force, offline])
self.add_remove_options(role_parser, parents=[common, roles_path])
self.add_delete_options(role_parser, parents=[common, github])
self.add_list_options(role_parser, parents=[common, roles_path])
self.add_search_options(role_parser, parents=[common])
self.add_import_options(role_parser, parents=[common, github])
self.add_setup_options(role_parser, parents=[common, roles_path])
self.add_info_options(role_parser, parents=[common, roles_path, offline])
self.add_install_options(role_parser, parents=[common, force, roles_path])
def add_download_options(self, parser, parents=None):
download_parser = parser.add_parser('download', parents=parents,
help='Download collections and their dependencies as a tarball for an '
'offline install.')
download_parser.set_defaults(func=self.execute_download)
download_parser.add_argument('args', help='Collection(s)', metavar='collection', nargs='*')
download_parser.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download collection(s) listed as dependencies.")
download_parser.add_argument('-p', '--download-path', dest='download_path',
default='./collections',
help='The directory to download the collections to.')
download_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be downloaded.')
download_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
def add_init_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
init_parser = parser.add_parser('init', parents=parents,
help='Initialize new {0} with the base structure of a '
'{0}.'.format(galaxy_type))
init_parser.set_defaults(func=self.execute_init)
init_parser.add_argument('--init-path', dest='init_path', default='./',
help='The path in which the skeleton {0} will be created. The default is the '
'current working directory.'.format(galaxy_type))
init_parser.add_argument('--{0}-skeleton'.format(galaxy_type), dest='{0}_skeleton'.format(galaxy_type),
default=C.GALAXY_COLLECTION_SKELETON if galaxy_type == 'collection' else C.GALAXY_ROLE_SKELETON,
help='The path to a {0} skeleton that the new {0} should be based '
'upon.'.format(galaxy_type))
obj_name_kwargs = {}
if galaxy_type == 'collection':
obj_name_kwargs['type'] = validate_collection_name
init_parser.add_argument('{0}_name'.format(galaxy_type), help='{0} name'.format(galaxy_type.capitalize()),
**obj_name_kwargs)
if galaxy_type == 'role':
init_parser.add_argument('--type', dest='role_type', action='store', default='default',
help="Initialize using an alternate role type. Valid types include: 'container', "
"'apb' and 'network'.")
def add_remove_options(self, parser, parents=None):
remove_parser = parser.add_parser('remove', parents=parents, help='Delete roles from roles_path.')
remove_parser.set_defaults(func=self.execute_remove)
remove_parser.add_argument('args', help='Role(s)', metavar='role', nargs='+')
def add_delete_options(self, parser, parents=None):
delete_parser = parser.add_parser('delete', parents=parents,
help='Removes the role from Galaxy. It does not remove or alter the actual '
'GitHub repository.')
delete_parser.set_defaults(func=self.execute_delete)
def add_list_options(self, parser, parents=None):
galaxy_type = 'role'
if parser.metavar == 'COLLECTION_ACTION':
galaxy_type = 'collection'
list_parser = parser.add_parser('list', parents=parents,
help='Show the name and version of each {0} installed in the {0}s_path.'.format(galaxy_type))
list_parser.set_defaults(func=self.execute_list)
list_parser.add_argument(galaxy_type, help=galaxy_type.capitalize(), nargs='?', metavar=galaxy_type)
if galaxy_type == 'collection':
list_parser.add_argument('--format', dest='output_format', choices=('human', 'yaml', 'json'), default='human',
help="Format to display the list of collections in.")
def add_search_options(self, parser, parents=None):
search_parser = parser.add_parser('search', parents=parents,
help='Search the Galaxy database by tags, platforms, author and multiple '
'keywords.')
search_parser.set_defaults(func=self.execute_search)
search_parser.add_argument('--platforms', dest='platforms', help='list of OS platforms to filter by')
search_parser.add_argument('--galaxy-tags', dest='galaxy_tags', help='list of galaxy tags to filter by')
search_parser.add_argument('--author', dest='author', help='GitHub username')
search_parser.add_argument('args', help='Search terms', metavar='searchterm', nargs='*')
def add_import_options(self, parser, parents=None):
import_parser = parser.add_parser('import', parents=parents, help='Import a role into a galaxy server')
import_parser.set_defaults(func=self.execute_import)
import_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import results.")
import_parser.add_argument('--branch', dest='reference',
help='The name of a branch to import. Defaults to the repository\'s default branch '
'(usually master)')
import_parser.add_argument('--role-name', dest='role_name',
help='The name the role should have, if different than the repo name')
import_parser.add_argument('--status', dest='check_status', action='store_true', default=False,
help='Check the status of the most recent import request for given github_'
'user/github_repo.')
def add_setup_options(self, parser, parents=None):
setup_parser = parser.add_parser('setup', parents=parents,
help='Manage the integration between Galaxy and the given source.')
setup_parser.set_defaults(func=self.execute_setup)
setup_parser.add_argument('--remove', dest='remove_id', default=None,
help='Remove the integration matching the provided ID value. Use --list to see '
'ID values.')
setup_parser.add_argument('--list', dest="setup_list", action='store_true', default=False,
help='List all of your integrations.')
setup_parser.add_argument('source', help='Source')
setup_parser.add_argument('github_user', help='GitHub username')
setup_parser.add_argument('github_repo', help='GitHub repository')
setup_parser.add_argument('secret', help='Secret')
def add_info_options(self, parser, parents=None):
info_parser = parser.add_parser('info', parents=parents, help='View more details about a specific role.')
info_parser.set_defaults(func=self.execute_info)
info_parser.add_argument('args', nargs='+', help='role', metavar='role_name[,version]')
def add_verify_options(self, parser, parents=None):
galaxy_type = 'collection'
verify_parser = parser.add_parser('verify', parents=parents, help='Compare checksums with the collection(s) '
'found on the server and the installed copy. This does not verify dependencies.')
verify_parser.set_defaults(func=self.execute_verify)
verify_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', help='The installed collection(s) name. '
'This is mutually exclusive with --requirements-file.')
verify_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help='Ignore errors during verification and continue with the next specified collection.')
verify_parser.add_argument('--offline', dest='offline', action='store_true', default=False,
help='Validate collection integrity locally without contacting server for '
'canonical manifest hash.')
verify_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be verified.')
verify_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
verify_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before using '
'it to verify the rest of the contents of a collection from a Galaxy server. Use in '
'conjunction with a positional collection name (mutually exclusive with --requirements-file).')
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
verify_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
verify_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
def add_install_options(self, parser, parents=None):
galaxy_type = 'collection' if parser.metavar == 'COLLECTION_ACTION' else 'role'
args_kwargs = {}
if galaxy_type == 'collection':
args_kwargs['help'] = 'The collection(s) name or path/url to a tar.gz collection artifact. This is ' \
'mutually exclusive with --requirements-file.'
ignore_errors_help = 'Ignore errors during installation and continue with the next specified ' \
'collection. This will not ignore dependency conflict errors.'
else:
args_kwargs['help'] = 'Role name, URL or tar file'
ignore_errors_help = 'Ignore errors and continue with the next specified role.'
install_parser = parser.add_parser('install', parents=parents,
help='Install {0}(s) from file(s), URL(s) or Ansible '
'Galaxy'.format(galaxy_type))
install_parser.set_defaults(func=self.execute_install)
install_parser.add_argument('args', metavar='{0}_name'.format(galaxy_type), nargs='*', **args_kwargs)
install_parser.add_argument('-i', '--ignore-errors', dest='ignore_errors', action='store_true', default=False,
help=ignore_errors_help)
install_exclusive = install_parser.add_mutually_exclusive_group()
install_exclusive.add_argument('-n', '--no-deps', dest='no_deps', action='store_true', default=False,
help="Don't download {0}s listed as dependencies.".format(galaxy_type))
install_exclusive.add_argument('--force-with-deps', dest='force_with_deps', action='store_true', default=False,
help="Force overwriting an existing {0} and its "
"dependencies.".format(galaxy_type))
valid_signature_count_help = 'The number of signatures that must successfully verify the collection. This should be a positive integer ' \
'or all to signify that all signatures must be used to verify the collection. ' \
'Prepend the value with + to fail if no valid signatures are found for the collection (e.g. +all).'
ignore_gpg_status_help = 'A status code to ignore during signature verification (for example, NO_PUBKEY). ' \
'Provide this option multiple times to ignore a list of status codes. ' \
'Descriptions for the choices can be seen at L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes).'
if galaxy_type == 'collection':
install_parser.add_argument('-p', '--collections-path', dest='collections_path',
default=self._get_default_collection_path(),
help='The path to the directory containing your collections.')
install_parser.add_argument('-r', '--requirements-file', dest='requirements',
help='A file containing a list of collections to be installed.')
install_parser.add_argument('--pre', dest='allow_pre_release', action='store_true',
help='Include pre-release versions. Semantic versioning pre-releases are ignored by default')
install_parser.add_argument('-U', '--upgrade', dest='upgrade', action='store_true', default=False,
help='Upgrade installed collection artifacts. This will also update dependencies unless --no-deps is provided')
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during signature verification') # Eventually default to ~/.ansible/pubring.kbx?
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--signature', dest='signatures', action='append',
help='An additional signature source to verify the authenticity of the MANIFEST.json before '
'installing the collection from a Galaxy server. Use in conjunction with a positional '
'collection name (mutually exclusive with --requirements-file).')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
else:
install_parser.add_argument('-r', '--role-file', dest='requirements',
help='A file containing a list of roles to be installed.')
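# argparse has not parsed anything yet, so sniff the raw argv for a
# requirements-file flag (-r bundled into short options, or --role-file) to
# decide whether this implicit 'ansible-galaxy install' invocation may also
# install collections and therefore needs the GPG verification options below.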
r_re = re.compile(r'^(?<!-)-[a-zA-Z]*r[a-zA-Z]*') # -r, -fr
contains_r = bool([a for a in self._raw_args if r_re.match(a)])
role_file_re = re.compile(r'--role-file($|=)') # --role-file foo, --role-file=foo
contains_role_file = bool([a for a in self._raw_args if role_file_re.match(a)])
if self._implicit_role and (contains_r or contains_role_file):
# Any collections in the requirements files will also be installed
install_parser.add_argument('--keyring', dest='keyring', default=C.GALAXY_GPG_KEYRING,
help='The keyring used during collection signature verification')
install_parser.add_argument('--disable-gpg-verify', dest='disable_gpg_verify', action='store_true',
default=C.GALAXY_DISABLE_GPG_VERIFY,
help='Disable GPG signature verification when installing collections from a Galaxy server')
install_parser.add_argument('--required-valid-signature-count', dest='required_valid_signature_count', type=validate_signature_count,
help=valid_signature_count_help, default=C.GALAXY_REQUIRED_VALID_SIGNATURE_COUNT)
install_parser.add_argument('--ignore-signature-status-code', dest='ignore_gpg_errors', type=str, action='append',
help=ignore_gpg_status_help, default=C.GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES,
choices=list(GPG_ERROR_MAP.keys()))
install_parser.add_argument('-g', '--keep-scm-meta', dest='keep_scm_meta', action='store_true',
default=False,
help='Use tar instead of the scm archive option when packaging the role.')
def add_build_options(self, parser, parents=None):
build_parser = parser.add_parser('build', parents=parents,
help='Build an Ansible collection artifact that can be published to Ansible '
'Galaxy.')
build_parser.set_defaults(func=self.execute_build)
build_parser.add_argument('args', metavar='collection', nargs='*', default=('.',),
help='Path to the collection(s) directory to build. This should be the directory '
'that contains the galaxy.yml file. The default is the current working '
'directory.')
build_parser.add_argument('--output-path', dest='output_path', default='./',
help='The path in which the collection is built. The default is the current '
'working directory.')
def add_publish_options(self, parser, parents=None):
publish_parser = parser.add_parser('publish', parents=parents,
help='Publish a collection artifact to Ansible Galaxy.')
publish_parser.set_defaults(func=self.execute_publish)
publish_parser.add_argument('args', metavar='collection_path',
help='The path to the collection tarball to publish.')
publish_parser.add_argument('--no-wait', dest='wait', action='store_false', default=True,
help="Don't wait for import validation results.")
publish_parser.add_argument('--import-timeout', dest='import_timeout', type=int, default=0,
help="The time to wait for the collection import process to finish.")
def post_process_args(self, options):
options = super(GalaxyCLI, self).post_process_args(options)
# ensure we have 'usable' cli option
setattr(options, 'validate_certs', (None if options.ignore_certs is None else not options.ignore_certs))
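# validate_certs is tri-state here: None means the flag was not given on the
# command line, so the config (for example GALAXY_IGNORE_CERTS) decides later in run().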
display.verbosity = options.verbosity
return options
def run(self):
super(GalaxyCLI, self).run()
self.galaxy = Galaxy()
def server_config_def(section, key, required, option_type):
config_def = {
'description': 'The %s of the %s Galaxy server' % (key, section),
'ini': [
{
'section': 'galaxy_server.%s' % section,
'key': key,
}
],
'env': [
{'name': 'ANSIBLE_GALAXY_SERVER_%s_%s' % (section.upper(), key.upper())},
],
'required': required,
'type': option_type,
}
if key in SERVER_ADDITIONAL:
config_def.update(SERVER_ADDITIONAL[key])
return config_def
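# For a server entry named e.g. 'release_galaxy' (a hypothetical name), the
# definition above resolves to the [galaxy_server.release_galaxy] ini section
# and ANSIBLE_GALAXY_SERVER_RELEASE_GALAXY_URL style environment variables.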
galaxy_options = {}
for optional_key in ['clear_response_cache', 'no_cache', 'timeout']:
if optional_key in context.CLIARGS:
galaxy_options[optional_key] = context.CLIARGS[optional_key]
config_servers = []
# Need to filter out empty strings or non-truthy values as an empty server list env var is equal to [''].
server_list = [s for s in C.GALAXY_SERVER_LIST or [] if s]
for server_priority, server_key in enumerate(server_list, start=1):
# Abuse the 'plugin config' by making 'galaxy_server' a type of plugin
# Config definitions are looked up dynamically based on the C.GALAXY_SERVER_LIST entry. We look up the
# section [galaxy_server.<server>] for the values url, username, password, and token.
config_dict = dict((k, server_config_def(server_key, k, req, ensure_type)) for k, req, ensure_type in SERVER_DEF)
defs = AnsibleLoader(yaml_dump(config_dict)).get_single_data()
C.config.initialize_plugin_configuration_definitions('galaxy_server', server_key, defs)
# resolve the config created options above with existing config and user options
server_options = C.config.get_plugin_options('galaxy_server', server_key)
# auth_url is used to create the token, but not directly by GalaxyAPI, so it
# doesn't need to be passed as a kwarg to GalaxyAPI; the same goes for the
# others we pop here
auth_url = server_options.pop('auth_url')
client_id = server_options.pop('client_id')
token_val = server_options['token'] or NoTokenSentinel
username = server_options['username']
v3 = server_options.pop('v3')
validate_certs = server_options['validate_certs']
if v3:
# This allows a user to explicitly indicate the server uses the /v3 API
# This was added for testing against pulp_ansible and I'm not sure it has
# a practical purpose outside of this use case. As such, this option is not
# documented as of now
server_options['available_api_versions'] = {'v3': '/v3'}
# default case if no auth info is provided.
server_options['token'] = None
if username:
server_options['token'] = BasicAuthToken(username, server_options['password'])
else:
if token_val:
if auth_url:
server_options['token'] = KeycloakToken(access_token=token_val,
auth_url=auth_url,
validate_certs=validate_certs,
client_id=client_id)
else:
# The galaxy v1 / github / django / 'Token'
server_options['token'] = GalaxyToken(token=token_val)
server_options.update(galaxy_options)
config_servers.append(GalaxyAPI(
self.galaxy, server_key,
priority=server_priority,
**server_options
))
cmd_server = context.CLIARGS['api_server']
cmd_token = GalaxyToken(token=context.CLIARGS['api_key'])
# resolve validate_certs
v_config_default = True if C.GALAXY_IGNORE_CERTS is None else not C.GALAXY_IGNORE_CERTS
validate_certs = v_config_default if context.CLIARGS['validate_certs'] is None else context.CLIARGS['validate_certs']
if cmd_server:
# Cmd args take precedence over the config entry, but first check whether the arg
# was a server name and use that config entry; otherwise create a new API entry
# for the server specified.
config_server = next((s for s in config_servers if s.name == cmd_server), None)
if config_server:
self.api_servers.append(config_server)
else:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'cmd_arg', cmd_server, token=cmd_token,
priority=len(config_servers) + 1,
validate_certs=validate_certs,
**galaxy_options
))
else:
self.api_servers = config_servers
# Default to C.GALAXY_SERVER if no servers were defined
if len(self.api_servers) == 0:
self.api_servers.append(GalaxyAPI(
self.galaxy, 'default', C.GALAXY_SERVER, token=cmd_token,
priority=0,
validate_certs=validate_certs,
**galaxy_options
))
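# Dispatch to whichever execute_* handler the matching sub-parser registered
# through set_defaults(func=...).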
return context.CLIARGS['func']()
@property
def api(self):
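# Lazily select the first configured server that exposes the v1 API, falling
# back to the first server when none do.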
if self._api:
return self._api
for server in self.api_servers:
try:
if u'v1' in server.available_api_versions:
self._api = server
break
except Exception:
continue
if not self._api:
self._api = self.api_servers[0]
return self._api
def _get_default_collection_path(self):
return C.COLLECTIONS_PATHS[0]
def _parse_requirements_file(self, requirements_file, allow_old_format=True, artifacts_manager=None, validate_signature_options=True):
"""
Parses an Ansible requirements.yml file and returns all the roles and/or collections defined in it. There are two
requirements file formats:
# v1 (roles only)
- src: The source of the role, required if include is not set. Can be Galaxy role name, URL to a SCM repo or tarball.
name: Downloads the role to the specified name, defaults to Galaxy name from Galaxy or name of repo if src is a URL.
scm: If src is a URL, specify the SCM. Only git or hg are supported and defaults to git.
version: The version of the role to download. Can also be tag, commit, or branch name and defaults to master.
include: Path to additional requirements.yml files.
# v2 (roles and collections)
---
roles:
# Same as v1 format just under the roles key
collections:
- namespace.collection
- name: namespace.collection
version: version identifier, multiple identifiers are separated by ','
source: the URL or a predefined source name that relates to C.GALAXY_SERVER_LIST
type: git|file|url|galaxy
:param requirements_file: The path to the requirements file.
:param allow_old_format: Will fail if a v1 requirements file is found and this is set to False.
:param artifacts_manager: Artifacts manager.
:return: a dict containing the roles and collections found in the requirements file.
"""
requirements = {
'roles': [],
'collections': [],
}
b_requirements_file = to_bytes(requirements_file, errors='surrogate_or_strict')
if not os.path.exists(b_requirements_file):
raise AnsibleError("The requirements file '%s' does not exist." % to_native(requirements_file))
display.vvv("Reading requirement file at '%s'" % requirements_file)
with open(b_requirements_file, 'rb') as req_obj:
try:
file_requirements = yaml_load(req_obj)
except YAMLError as err:
raise AnsibleError(
"Failed to parse the requirements yml at '%s' with the following error:\n%s"
% (to_native(requirements_file), to_native(err)))
if file_requirements is None:
raise AnsibleError("No requirements found in file '%s'" % to_native(requirements_file))
def parse_role_req(requirement):
if "include" not in requirement:
role = RoleRequirement.role_yaml_parse(requirement)
display.vvv("found role %s in yaml file" % to_text(role))
if "name" not in role and "src" not in role:
raise AnsibleError("Must specify name or src for role")
return [GalaxyRole(self.galaxy, self.api, **role)]
else:
b_include_path = to_bytes(requirement["include"], errors="surrogate_or_strict")
if not os.path.isfile(b_include_path):
raise AnsibleError("Failed to find include requirements file '%s' in '%s'"
% (to_native(b_include_path), to_native(requirements_file)))
with open(b_include_path, 'rb') as f_include:
try:
return [GalaxyRole(self.galaxy, self.api, **r) for r in
(RoleRequirement.role_yaml_parse(i) for i in yaml_load(f_include))]
except Exception as e:
raise AnsibleError("Unable to load data from include requirements file: %s %s"
% (to_native(requirements_file), to_native(e)))
if isinstance(file_requirements, list):
# Older format that contains only roles
if not allow_old_format:
raise AnsibleError("Expecting requirements file to be a dict with the key 'collections' that contains "
"a list of collections to install")
for role_req in file_requirements:
requirements['roles'] += parse_role_req(role_req)
else:
# Newer format with a collections and/or roles key
extra_keys = set(file_requirements.keys()).difference(set(['roles', 'collections']))
if extra_keys:
raise AnsibleError("Expecting only 'roles' and/or 'collections' as base keys in the requirements "
"file. Found: %s" % (to_native(", ".join(extra_keys))))
for role_req in file_requirements.get('roles') or []:
requirements['roles'] += parse_role_req(role_req)
requirements['collections'] = [
Requirement.from_requirement_dict(
self._init_coll_req_dict(collection_req),
artifacts_manager,
validate_signature_options,
)
for collection_req in file_requirements.get('collections') or []
]
return requirements
def _init_coll_req_dict(self, coll_req):
if not isinstance(coll_req, dict):
# Assume it's a string:
return {'name': coll_req}
if (
'name' not in coll_req or
not coll_req.get('source') or
coll_req.get('type', 'galaxy') != 'galaxy'
):
return coll_req
# Try and match up the requirement source with our list of Galaxy API
# servers defined in the config, otherwise create a server with that
# URL without any auth.
coll_req['source'] = next(
iter(
srvr for srvr in self.api_servers
if coll_req['source'] in {srvr.name, srvr.api_server}
),
GalaxyAPI(
self.galaxy,
'explicit_requirement_{name!s}'.format(
name=coll_req['name'],
),
coll_req['source'],
validate_certs=not context.CLIARGS['ignore_certs'],
),
)
return coll_req
@staticmethod
def exit_without_ignore(rc=1):
"""
Exits with the specified return code unless the
option --ignore-errors was specified
"""
if not context.CLIARGS['ignore_errors']:
raise AnsibleError('- you can use --ignore-errors to skip failed roles and finish processing the list.')
@staticmethod
def _display_role_info(role_info):
text = [u"", u"Role: %s" % to_text(role_info['name'])]
# Get the top-level 'description' first, falling back to galaxy_info['galaxy_info']['description'].
galaxy_info = role_info.get('galaxy_info', {})
description = role_info.get('description', galaxy_info.get('description', ''))
text.append(u"\tdescription: %s" % description)
for k in sorted(role_info.keys()):
if k in GalaxyCLI.SKIP_INFO_KEYS:
continue
if isinstance(role_info[k], dict):
text.append(u"\t%s:" % (k))
for key in sorted(role_info[k].keys()):
if key in GalaxyCLI.SKIP_INFO_KEYS:
continue
text.append(u"\t\t%s: %s" % (key, role_info[k][key]))
else:
text.append(u"\t%s: %s" % (k, role_info[k]))
# make sure we have a trailing newline returned
text.append(u"")
return u'\n'.join(text)
@staticmethod
def _resolve_path(path):
return os.path.abspath(os.path.expanduser(os.path.expandvars(path)))
@staticmethod
def _get_skeleton_galaxy_yml(template_path, inject_data):
with open(to_bytes(template_path, errors='surrogate_or_strict'), 'rb') as template_obj:
meta_template = to_text(template_obj.read(), errors='surrogate_or_strict')
galaxy_meta = get_collections_galaxy_meta_info()
required_config = []
optional_config = []
for meta_entry in galaxy_meta:
config_list = required_config if meta_entry.get('required', False) else optional_config
value = inject_data.get(meta_entry['key'], None)
if not value:
meta_type = meta_entry.get('type', 'str')
if meta_type == 'str':
value = ''
elif meta_type == 'list':
value = []
elif meta_type == 'dict':
value = {}
meta_entry['value'] = value
config_list.append(meta_entry)
link_pattern = re.compile(r"L\(([^)]+),\s+([^)]+)\)")
const_pattern = re.compile(r"C\(([^)]+)\)")
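# The Galaxy metadata descriptions use Ansible doc markup: L(text, url) for
# links and C(value) for literals. comment_ify below rewrites both into plain
# text before wrapping each entry as a YAML comment.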
def comment_ify(v):
if isinstance(v, list):
v = ". ".join([l.rstrip('.') for l in v])
v = link_pattern.sub(r"\1 <\2>", v)
v = const_pattern.sub(r"'\1'", v)
return textwrap.fill(v, width=117, initial_indent="# ", subsequent_indent="# ", break_on_hyphens=False)
loader = DataLoader()
templar = Templar(loader, variables={'required_config': required_config, 'optional_config': optional_config})
templar.environment.filters['comment_ify'] = comment_ify
meta_value = templar.template(meta_template)
return meta_value
def _require_one_of_collections_requirements(
self, collections, requirements_file,
signatures=None,
artifacts_manager=None,
):
if collections and requirements_file:
raise AnsibleError("The positional collection_name arg and --requirements-file are mutually exclusive.")
elif not collections and not requirements_file:
raise AnsibleError("You must specify a collection name or a requirements file.")
elif requirements_file:
if signatures is not None:
raise AnsibleError(
"The --signatures option and --requirements-file are mutually exclusive. "
"Use the --signatures with positional collection_name args or provide a "
"'signatures' key for requirements in the --requirements-file."
)
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._parse_requirements_file(
requirements_file,
allow_old_format=False,
artifacts_manager=artifacts_manager,
)
else:
requirements = {
'collections': [
Requirement.from_string(coll_input, artifacts_manager, signatures)
for coll_input in collections
],
'roles': [],
}
return requirements
############################
# execute actions
############################
def execute_role(self):
"""
Perform the action on an Ansible Galaxy role. Must be combined with a further action like delete/install/init
as listed below.
"""
# To satisfy doc build
pass
def execute_collection(self):
"""
Perform the action on an Ansible Galaxy collection. Must be combined with a further action like init/install as
listed below.
"""
# To satisfy doc build
pass
def execute_build(self):
"""
Build an Ansible Galaxy collection artifact that can be stored in a central repository like Ansible Galaxy.
By default, this command builds from the current working directory. You can optionally pass in the
collection input path (where the ``galaxy.yml`` file is).
"""
force = context.CLIARGS['force']
output_path = GalaxyCLI._resolve_path(context.CLIARGS['output_path'])
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
elif os.path.isfile(b_output_path):
raise AnsibleError("- the output collection directory %s is a file - aborting" % to_native(output_path))
for collection_path in context.CLIARGS['args']:
collection_path = GalaxyCLI._resolve_path(collection_path)
build_collection(
to_text(collection_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force,
)
@with_collection_artifacts_manager
def execute_download(self, artifacts_manager=None):
collections = context.CLIARGS['args']
no_deps = context.CLIARGS['no_deps']
download_path = context.CLIARGS['download_path']
requirements_file = context.CLIARGS['requirements']
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
artifacts_manager=artifacts_manager,
)['collections']
download_path = GalaxyCLI._resolve_path(download_path)
b_download_path = to_bytes(download_path, errors='surrogate_or_strict')
if not os.path.exists(b_download_path):
os.makedirs(b_download_path)
download_collections(
requirements, download_path, self.api_servers, no_deps,
context.CLIARGS['allow_pre_release'],
artifacts_manager=artifacts_manager,
)
return 0
def execute_init(self):
"""
Creates the skeleton framework of a role or collection that complies with the Galaxy metadata format.
Requires a role or collection name. The collection name must be in the format ``<namespace>.<collection>``.
"""
galaxy_type = context.CLIARGS['type']
init_path = context.CLIARGS['init_path']
force = context.CLIARGS['force']
obj_skeleton = context.CLIARGS['{0}_skeleton'.format(galaxy_type)]
obj_name = context.CLIARGS['{0}_name'.format(galaxy_type)]
inject_data = dict(
description='your {0} description'.format(galaxy_type),
ansible_plugin_list_dir=get_versioned_doclink('plugins/plugins.html'),
)
if galaxy_type == 'role':
inject_data.update(dict(
author='your name',
company='your company (optional)',
license='license (GPL-2.0-or-later, MIT, etc)',
role_name=obj_name,
role_type=context.CLIARGS['role_type'],
issue_tracker_url='http://example.com/issue/tracker',
repository_url='http://example.com/repository',
documentation_url='http://docs.example.com',
homepage_url='http://example.com',
min_ansible_version=ansible_version[:3], # x.y
dependencies=[],
))
skeleton_ignore_expressions = C.GALAXY_ROLE_SKELETON_IGNORE
obj_path = os.path.join(init_path, obj_name)
elif galaxy_type == 'collection':
namespace, collection_name = obj_name.split('.', 1)
inject_data.update(dict(
namespace=namespace,
collection_name=collection_name,
version='1.0.0',
readme='README.md',
authors=['your name <[email protected]>'],
license=['GPL-2.0-or-later'],
repository='http://example.com/repository',
documentation='http://docs.example.com',
homepage='http://example.com',
issues='http://example.com/issue/tracker',
build_ignore=[],
))
skeleton_ignore_expressions = C.GALAXY_COLLECTION_SKELETON_IGNORE
obj_path = os.path.join(init_path, namespace, collection_name)
b_obj_path = to_bytes(obj_path, errors='surrogate_or_strict')
if os.path.exists(b_obj_path):
if os.path.isfile(obj_path):
raise AnsibleError("- the path %s already exists, but is a file - aborting" % to_native(obj_path))
elif not force:
raise AnsibleError("- the directory %s already exists. "
"You can use --force to re-initialize this directory,\n"
"however it will reset any main.yml files that may have\n"
"been modified there already." % to_native(obj_path))
# delete the contents rather than the collection root in case init was run from the root (--init-path ../../)
for root, dirs, files in os.walk(b_obj_path, topdown=True):
for old_dir in dirs:
path = os.path.join(root, old_dir)
shutil.rmtree(path)
for old_file in files:
path = os.path.join(root, old_file)
os.unlink(path)
if obj_skeleton is not None:
own_skeleton = False
else:
own_skeleton = True
obj_skeleton = self.galaxy.default_role_skeleton_path
skeleton_ignore_expressions = ['^.*/.git_keep$']
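# .git_keep files in the built-in skeleton are presumably placeholders that keep
# otherwise-empty directories under version control; they must not be copied
# into the newly initialized object.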
obj_skeleton = os.path.expanduser(obj_skeleton)
skeleton_ignore_re = [re.compile(x) for x in skeleton_ignore_expressions]
if not os.path.exists(obj_skeleton):
raise AnsibleError("- the skeleton path '{0}' does not exist, cannot init {1}".format(
to_native(obj_skeleton), galaxy_type)
)
loader = DataLoader()
templar = Templar(loader, variables=inject_data)
# create role directory
if not os.path.exists(b_obj_path):
os.makedirs(b_obj_path)
for root, dirs, files in os.walk(obj_skeleton, topdown=True):
rel_root = os.path.relpath(root, obj_skeleton)
rel_dirs = rel_root.split(os.sep)
rel_root_dir = rel_dirs[0]
if galaxy_type == 'collection':
# A collection can contain templates in playbooks/*/templates and roles/*/templates
in_templates_dir = rel_root_dir in ['playbooks', 'roles'] and 'templates' in rel_dirs
else:
in_templates_dir = rel_root_dir == 'templates'
# Filter out ignored directory names
# Use [:] to mutate the list os.walk uses
dirs[:] = [d for d in dirs if not any(r.match(d) for r in skeleton_ignore_re)]
for f in files:
filename, ext = os.path.splitext(f)
if any(r.match(os.path.join(rel_root, f)) for r in skeleton_ignore_re):
continue
if galaxy_type == 'collection' and own_skeleton and rel_root == '.' and f == 'galaxy.yml.j2':
# Special use case for galaxy.yml.j2 in our own default collection skeleton. We build the options
# dynamically which requires special options to be set.
# The templated data's keys must match the key name but the inject data contains collection_name
# instead of name. We just make a copy and change the key back to name for this file.
template_data = inject_data.copy()
template_data['name'] = template_data.pop('collection_name')
meta_value = GalaxyCLI._get_skeleton_galaxy_yml(os.path.join(root, rel_root, f), template_data)
b_dest_file = to_bytes(os.path.join(obj_path, rel_root, filename), errors='surrogate_or_strict')
with open(b_dest_file, 'wb') as galaxy_obj:
galaxy_obj.write(to_bytes(meta_value, errors='surrogate_or_strict'))
elif ext == ".j2" and not in_templates_dir:
src_template = os.path.join(root, f)
dest_file = os.path.join(obj_path, rel_root, filename)
template_data = to_text(loader._get_file_contents(src_template)[0], errors='surrogate_or_strict')
b_rendered = to_bytes(templar.template(template_data), errors='surrogate_or_strict')
with open(dest_file, 'wb') as df:
df.write(b_rendered)
else:
f_rel_path = os.path.relpath(os.path.join(root, f), obj_skeleton)
shutil.copyfile(os.path.join(root, f), os.path.join(obj_path, f_rel_path))
for d in dirs:
b_dir_path = to_bytes(os.path.join(obj_path, rel_root, d), errors='surrogate_or_strict')
if not os.path.exists(b_dir_path):
os.makedirs(b_dir_path)
display.display("- %s %s was created successfully" % (galaxy_type.title(), obj_name))
def execute_info(self):
"""
Prints out detailed information about an installed role as well as info available from the Galaxy API.
"""
roles_path = context.CLIARGS['roles_path']
data = ''
for role in context.CLIARGS['args']:
role_info = {'path': roles_path}
gr = GalaxyRole(self.galaxy, self.api, role)
install_info = gr.install_info
if install_info:
if 'version' in install_info:
install_info['installed_version'] = install_info['version']
del install_info['version']
role_info.update(install_info)
if not context.CLIARGS['offline']:
remote_data = None
try:
remote_data = self.api.lookup_role_by_name(role, False)
except AnsibleError as e:
if e.http_code == 400 and 'Bad Request' in e.message:
# Role does not exist in Ansible Galaxy
data = u"- the role %s was not found" % role
break
raise AnsibleError("Unable to find info about '%s': %s" % (role, e))
if remote_data:
role_info.update(remote_data)
elif context.CLIARGS['offline'] and not gr._exists:
data = u"- the role %s was not found" % role
break
if gr.metadata:
role_info.update(gr.metadata)
req = RoleRequirement()
role_spec = req.role_yaml_parse({'role': role})
if role_spec:
role_info.update(role_spec)
data += self._display_role_info(role_info)
self.pager(data)
@with_collection_artifacts_manager
def execute_verify(self, artifacts_manager=None):
collections = context.CLIARGS['args']
search_paths = context.CLIARGS['collections_path']
ignore_errors = context.CLIARGS['ignore_errors']
local_verify_only = context.CLIARGS['offline']
requirements_file = context.CLIARGS['requirements']
signatures = context.CLIARGS['signatures']
if signatures is not None:
signatures = list(signatures)
requirements = self._require_one_of_collections_requirements(
collections, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)['collections']
resolved_paths = [validate_collection_path(GalaxyCLI._resolve_path(path)) for path in search_paths]
results = verify_collections(
requirements, resolved_paths,
self.api_servers, ignore_errors,
local_verify_only=local_verify_only,
artifacts_manager=artifacts_manager,
)
if any(result for result in results if not result.success):
return 1
return 0
@with_collection_artifacts_manager
def execute_install(self, artifacts_manager=None):
"""
Install one or more roles (``ansible-galaxy role install``), or one or more collections (``ansible-galaxy collection install``).
You can pass in a list (roles or collections) or use the file
option listed below (these are mutually exclusive). If you pass in a list, it
can be a name (which will be downloaded via the Galaxy API and GitHub), or it can be a local tar archive file.
:param artifacts_manager: Artifacts manager.
"""
install_items = context.CLIARGS['args']
requirements_file = context.CLIARGS['requirements']
collection_path = None
signatures = context.CLIARGS.get('signatures')
if signatures is not None:
signatures = list(signatures)
if requirements_file:
requirements_file = GalaxyCLI._resolve_path(requirements_file)
two_type_warning = "The requirements file '%s' contains {0}s which will be ignored. To install these {0}s " \
"run 'ansible-galaxy {0} install -r' or to install both at the same time run " \
"'ansible-galaxy install -r' without a custom install path." % to_text(requirements_file)
# TODO: Would be nice to share the same behaviour with args and -r in collections and roles.
collection_requirements = []
role_requirements = []
if context.CLIARGS['type'] == 'collection':
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['collections_path'])
requirements = self._require_one_of_collections_requirements(
install_items, requirements_file,
signatures=signatures,
artifacts_manager=artifacts_manager,
)
collection_requirements = requirements['collections']
if requirements['roles']:
display.vvv(two_type_warning.format('role'))
else:
if not install_items and requirements_file is None:
raise AnsibleOptionsError("- you must specify a user/role name or a roles file")
if requirements_file:
if not (requirements_file.endswith('.yaml') or requirements_file.endswith('.yml')):
raise AnsibleError("Invalid role requirements file, it must end with a .yml or .yaml extension")
galaxy_args = self._raw_args
will_install_collections = self._implicit_role and '-p' not in galaxy_args and '--roles-path' not in galaxy_args
requirements = self._parse_requirements_file(
requirements_file,
artifacts_manager=artifacts_manager,
validate_signature_options=will_install_collections,
)
role_requirements = requirements['roles']
# We can only install collections and roles at the same time if the type wasn't specified and the -p
# argument was not used. If collections are present in the requirements then at least display a msg.
if requirements['collections'] and (not self._implicit_role or '-p' in galaxy_args or
'--roles-path' in galaxy_args):
# We only want to display a warning for 'ansible-galaxy install -r ... -p ...'. In other cases the user
# was explicit about the type and shouldn't care that collections were skipped.
display_func = display.warning if self._implicit_role else display.vvv
display_func(two_type_warning.format('collection'))
else:
collection_path = self._get_default_collection_path()
collection_requirements = requirements['collections']
else:
# roles were specified directly, so we'll just go out grab them
# (and their dependencies, unless the user doesn't want us to).
for rname in context.CLIARGS['args']:
role = RoleRequirement.role_yaml_parse(rname.strip())
role_requirements.append(GalaxyRole(self.galaxy, self.api, **role))
if not role_requirements and not collection_requirements:
display.display("Skipping install, no requirements found")
return
if role_requirements:
display.display("Starting galaxy role install process")
self._execute_install_role(role_requirements)
if collection_requirements:
display.display("Starting galaxy collection install process")
# Collections can technically be installed even when ansible-galaxy is in role mode so we need to pass in
# the install path as context.CLIARGS['collections_path'] won't be set (default is calculated above).
self._execute_install_collection(
collection_requirements, collection_path,
artifacts_manager=artifacts_manager,
)
def _execute_install_collection(
self, requirements, path, artifacts_manager,
):
force = context.CLIARGS['force']
ignore_errors = context.CLIARGS['ignore_errors']
no_deps = context.CLIARGS['no_deps']
force_with_deps = context.CLIARGS['force_with_deps']
try:
disable_gpg_verify = context.CLIARGS['disable_gpg_verify']
except KeyError:
if self._implicit_role:
raise AnsibleError(
'Unable to properly parse command line arguments. Please use "ansible-galaxy collection install" '
'instead of "ansible-galaxy install".'
)
raise
# If `ansible-galaxy install` is used, collection-only options aren't available to the user and won't be in context.CLIARGS
allow_pre_release = context.CLIARGS.get('allow_pre_release', False)
upgrade = context.CLIARGS.get('upgrade', False)
collections_path = C.COLLECTIONS_PATHS
if len([p for p in collections_path if p.startswith(path)]) == 0:
display.warning("The specified collections path '%s' is not part of the configured Ansible "
"collections paths '%s'. The installed collection will not be picked up in an Ansible "
"run, unless within a playbook-adjacent collections directory." % (to_text(path), to_text(":".join(collections_path))))
output_path = validate_collection_path(path)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
if not os.path.exists(b_output_path):
os.makedirs(b_output_path)
install_collections(
requirements, output_path, self.api_servers, ignore_errors,
no_deps, force, force_with_deps, upgrade,
allow_pre_release=allow_pre_release,
artifacts_manager=artifacts_manager,
disable_gpg_verify=disable_gpg_verify,
)
return 0
def _execute_install_role(self, requirements):
role_file = context.CLIARGS['requirements']
no_deps = context.CLIARGS['no_deps']
force_deps = context.CLIARGS['force_with_deps']
force = context.CLIARGS['force'] or force_deps
for role in requirements:
# only process roles from the roles file whose names match the given args, if any were given
if role_file and context.CLIARGS['args'] and role.name not in context.CLIARGS['args']:
display.vvv('Skipping role %s' % role.name)
continue
display.vvv('Processing role %s ' % role.name)
# query the galaxy API for the role data
if role.install_info is not None:
if role.install_info['version'] != role.version or force:
if force:
display.display('- changing role %s from %s to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
role.remove()
else:
display.warning('- %s (%s) is already installed - use --force to change version to %s' %
(role.name, role.install_info['version'], role.version or "unspecified"))
continue
else:
if not force:
display.display('- %s is already installed, skipping.' % str(role))
continue
try:
installed = role.install()
except AnsibleError as e:
display.warning(u"- %s was NOT installed successfully: %s " % (role.name, to_text(e)))
self.exit_without_ignore()
continue
# install dependencies, if we want them
if not no_deps and installed:
if not role.metadata:
# NOTE: the meta file is also required for installing the role, not just dependencies
display.warning("Meta file %s is empty. Skipping dependencies." % role.path)
else:
role_dependencies = role.metadata_dependencies + role.requirements
for dep in role_dependencies:
display.debug('Installing dep %s' % dep)
dep_req = RoleRequirement()
dep_info = dep_req.role_yaml_parse(dep)
dep_role = GalaxyRole(self.galaxy, self.api, **dep_info)
if '.' not in dep_role.name and '.' not in dep_role.src and dep_role.scm is None:
# we know we can skip this, as it's not going to
# be found on galaxy.ansible.com
continue
if dep_role.install_info is None:
if dep_role not in requirements:
display.display('- adding dependency: %s' % to_text(dep_role))
requirements.append(dep_role)
else:
display.display('- dependency %s already pending installation.' % dep_role.name)
else:
if dep_role.install_info['version'] != dep_role.version:
if force_deps:
display.display('- changing dependent role %s from %s to %s' %
(dep_role.name, dep_role.install_info['version'], dep_role.version or "unspecified"))
dep_role.remove()
requirements.append(dep_role)
else:
display.warning('- dependency %s (%s) from role %s differs from already installed version (%s), skipping' %
(to_text(dep_role), dep_role.version, role.name, dep_role.install_info['version']))
else:
if force_deps:
requirements.append(dep_role)
else:
display.display('- dependency %s is already installed, skipping.' % dep_role.name)
if not installed:
display.warning("- %s was NOT installed successfully." % role.name)
self.exit_without_ignore()
return 0
def execute_remove(self):
"""
Removes the list of roles passed as arguments from the local system.
"""
if not context.CLIARGS['args']:
raise AnsibleOptionsError('- you must specify at least one role to remove.')
for role_name in context.CLIARGS['args']:
role = GalaxyRole(self.galaxy, self.api, role_name)
try:
if role.remove():
display.display('- successfully removed %s' % role_name)
else:
display.display('- %s is not installed, skipping.' % role_name)
except Exception as e:
raise AnsibleError("Failed to remove role %s: %s" % (role_name, to_native(e)))
return 0
def execute_list(self):
"""
List installed collections or roles
"""
if context.CLIARGS['type'] == 'role':
self.execute_list_role()
elif context.CLIARGS['type'] == 'collection':
self.execute_list_collection()
def execute_list_role(self):
"""
List all roles installed on the local system or a specific role
"""
path_found = False
role_found = False
warnings = []
roles_search_paths = context.CLIARGS['roles_path']
role_name = context.CLIARGS['role']
for path in roles_search_paths:
role_path = GalaxyCLI._resolve_path(path)
if os.path.isdir(path):
path_found = True
else:
warnings.append("- the configured path {0} does not exist.".format(path))
continue
if role_name:
# show the requested role, if it exists
gr = GalaxyRole(self.galaxy, self.api, role_name, path=os.path.join(role_path, role_name))
if os.path.isdir(gr.path):
role_found = True
display.display('# %s' % os.path.dirname(gr.path))
_display_role(gr)
break
warnings.append("- the role %s was not found" % role_name)
else:
if not os.path.exists(role_path):
warnings.append("- the configured path %s does not exist." % role_path)
continue
if not os.path.isdir(role_path):
warnings.append("- the configured path %s, exists, but it is not a directory." % role_path)
continue
display.display('# %s' % role_path)
path_files = os.listdir(role_path)
for path_file in path_files:
gr = GalaxyRole(self.galaxy, self.api, path_file, path=path)
if gr.metadata:
_display_role(gr)
# Do not warn if the role was found in any of the search paths
if role_found and role_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
return 0
@with_collection_artifacts_manager
def execute_list_collection(self, artifacts_manager=None):
"""
List all collections installed on the local system
:param artifacts_manager: Artifacts manager.
"""
if artifacts_manager is not None:
artifacts_manager.require_build_metadata = False
output_format = context.CLIARGS['output_format']
collections_search_paths = set(context.CLIARGS['collections_path'])
collection_name = context.CLIARGS['collection']
default_collections_path = AnsibleCollectionConfig.collection_paths
collections_in_paths = {}
warnings = []
path_found = False
collection_found = False
for path in collections_search_paths:
collection_path = GalaxyCLI._resolve_path(path)
if not os.path.exists(path):
if path in default_collections_path:
# don't warn for missing default paths
continue
warnings.append("- the configured path {0} does not exist.".format(collection_path))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
path_found = True
if collection_name:
# list a specific collection
validate_collection_name(collection_name)
namespace, collection = collection_name.split('.')
collection_path = validate_collection_path(collection_path)
b_collection_path = to_bytes(os.path.join(collection_path, namespace, collection), errors='surrogate_or_strict')
if not os.path.exists(b_collection_path):
warnings.append("- unable to find {0} in collection paths".format(collection_name))
continue
if not os.path.isdir(collection_path):
warnings.append("- the configured path {0}, exists, but it is not a directory.".format(collection_path))
continue
collection_found = True
try:
collection = Requirement.from_dir_path_as_unknown(
b_collection_path,
artifacts_manager,
)
except ValueError as val_err:
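# Re-raise as AnsibleError while chaining the original ValueError as the cause.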
six.raise_from(AnsibleError(val_err), val_err)
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver}
}
continue
fqcn_width, version_width = _get_collection_widths([collection])
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
_display_collection(collection, fqcn_width, version_width)
else:
# list all collections
collection_path = validate_collection_path(path)
if os.path.isdir(collection_path):
display.vvv("Searching {0} for collections".format(collection_path))
collections = list(find_existing_collections(
collection_path, artifacts_manager,
))
else:
# There was no 'ansible_collections/' directory in the path, so there
# are no collections here.
display.vvv("No 'ansible_collections' directory found at {0}".format(collection_path))
continue
if not collections:
display.vvv("No collections found at {0}".format(collection_path))
continue
if output_format in {'yaml', 'json'}:
collections_in_paths[collection_path] = {
collection.fqcn: {'version': collection.ver} for collection in collections
}
continue
# Display header
fqcn_width, version_width = _get_collection_widths(collections)
_display_header(collection_path, 'Collection', 'Version', fqcn_width, version_width)
# Sort collections by the namespace and name
for collection in sorted(collections, key=to_text):
_display_collection(collection, fqcn_width, version_width)
# Do not warn if the specific collection was found in any of the search paths
if collection_found and collection_name:
warnings = []
for w in warnings:
display.warning(w)
if not path_found:
raise AnsibleOptionsError("- None of the provided paths were usable. Please specify a valid path with --{0}s-path".format(context.CLIARGS['type']))
if output_format == 'json':
display.display(json.dumps(collections_in_paths))
elif output_format == 'yaml':
display.display(yaml_dump(collections_in_paths))
return 0
def execute_publish(self):
"""
Publish a collection into Ansible Galaxy. Requires the path to the collection tarball to publish.
"""
collection_path = GalaxyCLI._resolve_path(context.CLIARGS['args'])
wait = context.CLIARGS['wait']
timeout = context.CLIARGS['import_timeout']
publish_collection(collection_path, self.api, wait, timeout)
def execute_search(self):
''' Searches for roles on the Ansible Galaxy server '''
page_size = 1000
search = None
if context.CLIARGS['args']:
search = '+'.join(context.CLIARGS['args'])
if not search and not context.CLIARGS['platforms'] and not context.CLIARGS['galaxy_tags'] and not context.CLIARGS['author']:
raise AnsibleError("Invalid query. At least one search term, platform, galaxy tag or author must be provided.")
response = self.api.search_roles(search, platforms=context.CLIARGS['platforms'],
tags=context.CLIARGS['galaxy_tags'], author=context.CLIARGS['author'], page_size=page_size)
if response['count'] == 0:
display.display("No roles match your search.", color=C.COLOR_ERROR)
return 1
data = [u'']
if response['count'] > page_size:
data.append(u"Found %d roles matching your search. Showing first %s." % (response['count'], page_size))
else:
data.append(u"Found %d roles matching your search:" % response['count'])
max_len = []
for role in response['results']:
max_len.append(len(role['username'] + '.' + role['name']))
name_len = max(max_len)
format_str = u" %%-%ds %%s" % name_len
data.append(u'')
data.append(format_str % (u"Name", u"Description"))
data.append(format_str % (u"----", u"-----------"))
for role in response['results']:
data.append(format_str % (u'%s.%s' % (role['username'], role['name']), role['description']))
data = u'\n'.join(data)
self.pager(data)
return 0
def execute_import(self):
""" used to import a role into Ansible Galaxy """
colors = {
'INFO': 'normal',
'WARNING': C.COLOR_WARN,
'ERROR': C.COLOR_ERROR,
'SUCCESS': C.COLOR_OK,
'FAILED': C.COLOR_ERROR,
}
github_user = to_text(context.CLIARGS['github_user'], errors='surrogate_or_strict')
github_repo = to_text(context.CLIARGS['github_repo'], errors='surrogate_or_strict')
if context.CLIARGS['check_status']:
task = self.api.get_import_task(github_user=github_user, github_repo=github_repo)
else:
# Submit an import request
task = self.api.create_import_task(github_user, github_repo,
reference=context.CLIARGS['reference'],
role_name=context.CLIARGS['role_name'])
if len(task) > 1:
# found multiple roles associated with github_user/github_repo
display.display("WARNING: More than one Galaxy role associated with Github repo %s/%s." % (github_user, github_repo),
color='yellow')
display.display("The following Galaxy roles are being updated:" + u'\n', color=C.COLOR_CHANGED)
for t in task:
display.display('%s.%s' % (t['summary_fields']['role']['namespace'], t['summary_fields']['role']['name']), color=C.COLOR_CHANGED)
display.display(u'\nTo properly namespace this role, remove each of the above and re-import %s/%s from scratch' % (github_user, github_repo),
color=C.COLOR_CHANGED)
return 0
# found a single role as expected
display.display("Successfully submitted import request %d" % task[0]['id'])
if not context.CLIARGS['wait']:
display.display("Role name: %s" % task[0]['summary_fields']['role']['name'])
display.display("Repo: %s/%s" % (task[0]['github_user'], task[0]['github_repo']))
if context.CLIARGS['check_status'] or context.CLIARGS['wait']:
# Get the status of the import
msg_list = []
finished = False
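# Poll the import task, echoing any new status messages, until Galaxy reports
# a terminal SUCCESS or FAILED state (checking every 10 seconds).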
while not finished:
task = self.api.get_import_task(task_id=task[0]['id'])
for msg in task[0]['summary_fields']['task_messages']:
if msg['id'] not in msg_list:
display.display(msg['message_text'], color=colors[msg['message_type']])
msg_list.append(msg['id'])
if task[0]['state'] in ['SUCCESS', 'FAILED']:
finished = True
else:
time.sleep(10)
return 0
def execute_setup(self):
""" Setup an integration from Github or Travis for Ansible Galaxy roles"""
if context.CLIARGS['setup_list']:
# List existing integration secrets
secrets = self.api.list_secrets()
if len(secrets) == 0:
# None found
display.display("No integrations found.")
return 0
display.display(u'\n' + "ID Source Repo", color=C.COLOR_OK)
display.display("---------- ---------- ----------", color=C.COLOR_OK)
for secret in secrets:
display.display("%-10s %-10s %s/%s" % (secret['id'], secret['source'], secret['github_user'],
secret['github_repo']), color=C.COLOR_OK)
return 0
if context.CLIARGS['remove_id']:
# Remove a secret
self.api.remove_secret(context.CLIARGS['remove_id'])
display.display("Secret removed. Integrations using this secret will not longer work.", color=C.COLOR_OK)
return 0
source = context.CLIARGS['source']
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
secret = context.CLIARGS['secret']
resp = self.api.add_secret(source, github_user, github_repo, secret)
display.display("Added integration for %s %s/%s" % (resp['source'], resp['github_user'], resp['github_repo']))
return 0
def execute_delete(self):
""" Delete a role from Ansible Galaxy. """
github_user = context.CLIARGS['github_user']
github_repo = context.CLIARGS['github_repo']
resp = self.api.delete_role(github_user, github_repo)
if len(resp['deleted_roles']) > 1:
display.display("Deleted the following roles:")
display.display("ID User Name")
display.display("------ --------------- ----------")
for role in resp['deleted_roles']:
display.display("%-8s %-15s %s" % (role.id, role.namespace, role.name))
display.display(resp['status'])
return 0
def main(args=None):
GalaxyCLI.cli_executor(args)
if __name__ == '__main__':
main()
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
During the dependency resolution stage, ansible-galaxy contacts https://galaxy.ansible.com/api/v2/collections/xxx/yyy even when it has already found a matching collection in collections_paths. I have not tried this with other collections, but I'm confident the pattern is consistent.
I've verified this happens in recent versions such as ansible-core 2.12.4 and 2.12.0, as well as in older versions such as Ansible 2.9.25.
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are locally
```console
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
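As an extra sanity check (not part of the original report), `ansible-galaxy collection list` can confirm which dependencies are already satisfied before running the final install with `--no-deps`. Illustrative output:

```console
[root@aap21 ~]# ansible-galaxy collection list

# /root/.ansible/collections/ansible_collections
Collection Version
---------- -------
amazon.aws 3.1.1
```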
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an isolated environment), try to install a module that has dependencies already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
lib/ansible/galaxy/collection/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Installed collections management package."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import errno
import fnmatch
import functools
import json
import os
import queue
import re
import shutil
import stat
import sys
import tarfile
import tempfile
import textwrap
import threading
import time
import typing as t
from collections import namedtuple
from contextlib import contextmanager
from dataclasses import dataclass, fields as dc_fields
from hashlib import sha256
from io import BytesIO
from importlib.metadata import distribution
from itertools import chain
try:
from packaging.requirements import Requirement as PkgReq
except ImportError:
class PkgReq: # type: ignore[no-redef]
pass
HAS_PACKAGING = False
else:
HAS_PACKAGING = True
try:
from distlib.manifest import Manifest # type: ignore[import]
from distlib import DistlibException # type: ignore[import]
except ImportError:
HAS_DISTLIB = False
else:
HAS_DISTLIB = True
if t.TYPE_CHECKING:
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
ManifestKeysType = t.Literal[
'collection_info', 'file_manifest_file', 'format',
]
FileMetaKeysType = t.Literal[
'name',
'ftype',
'chksum_type',
'chksum_sha256',
'format',
]
CollectionInfoKeysType = t.Literal[
# collection meta:
'namespace', 'name', 'version',
'authors', 'readme',
'tags', 'description',
'license', 'license_file',
'dependencies',
'repository', 'documentation',
'homepage', 'issues',
# files meta:
FileMetaKeysType,
]
ManifestValueType = t.Dict[CollectionInfoKeysType, t.Union[int, str, t.List[str], t.Dict[str, str], None]]
CollectionManifestType = t.Dict[ManifestKeysType, ManifestValueType]
FileManifestEntryType = t.Dict[FileMetaKeysType, t.Union[str, int, None]]
FilesManifestType = t.Dict[t.Literal['files', 'format'], t.Union[t.List[FileManifestEntryType], int]]
import ansible.constants as C
from ansible.errors import AnsibleError
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
_consume_file,
_download_file,
_get_json_from_installed_dir,
_get_meta_from_src_dir,
_tarfile_extract,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.collection.gpg import (
run_gpg_verify,
parse_gpg_errors,
get_signature_from_source,
GPG_ERROR_MAP,
)
try:
from ansible.galaxy.dependency_resolution import (
build_collection_dependency_resolver,
)
from ansible.galaxy.dependency_resolution.errors import (
CollectionDependencyResolutionImpossible,
CollectionDependencyInconsistentCandidate,
)
from ansible.galaxy.dependency_resolution.providers import (
RESOLVELIB_VERSION,
RESOLVELIB_LOWERBOUND,
RESOLVELIB_UPPERBOUND,
)
except ImportError:
HAS_RESOLVELIB = False
else:
HAS_RESOLVELIB = True
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement, _is_installed_collection_dir,
)
from ansible.galaxy.dependency_resolution.versioning import meets_requirements
from ansible.plugins.loader import get_all_plugin_loaders
from ansible.module_utils.six import raise_from
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_dump
from ansible.utils.collection_loader import AnsibleCollectionRef
from ansible.utils.display import Display
from ansible.utils.hashing import secure_hash, secure_hash_s
from ansible.utils.sentinel import Sentinel
display = Display()
MANIFEST_FORMAT = 1
MANIFEST_FILENAME = 'MANIFEST.json'
ModifiedContent = namedtuple('ModifiedContent', ['filename', 'expected', 'installed'])
SIGNATURE_COUNT_RE = r"^(?P<strict>\+)?(?:(?P<count>\d+)|(?P<all>all))$"
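# SIGNATURE_COUNT_RE accepts either a digit count or the keyword 'all', optionally
# prefixed with '+' for strict mode -- e.g. '1', '+1', 'all', '+all'. See
# verify_file_signatures() below for how each form is interpreted.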
@dataclass
class ManifestControl:
directives: list[str] = None
omit_default_directives: bool = False
def __post_init__(self):
        # Allow a dict representing this dataclass to be splatted directly.
        # Requires attrs to have a default value, so anything with a default
        # of None is swapped for its (potentially mutable) type default.
for field in dc_fields(self):
if getattr(self, field.name) is None:
super().__setattr__(field.name, field.type())
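    # Illustrative (hypothetical) usage: ManifestControl(**{'directives': ['include meta/*.yml'],
    # 'omit_default_directives': True}) builds the control object straight from a parsed
    # galaxy.yml 'manifest' mapping.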
class CollectionSignatureError(Exception):
def __init__(self, reasons=None, stdout=None, rc=None, ignore=False):
self.reasons = reasons
self.stdout = stdout
self.rc = rc
self.ignore = ignore
self._reason_wrapper = None
def _report_unexpected(self, collection_name):
return (
f"Unexpected error for '{collection_name}': "
f"GnuPG signature verification failed with the return code {self.rc} and output {self.stdout}"
)
def _report_expected(self, collection_name):
header = f"Signature verification failed for '{collection_name}' (return code {self.rc}):"
return header + self._format_reasons()
def _format_reasons(self):
if self._reason_wrapper is None:
self._reason_wrapper = textwrap.TextWrapper(
initial_indent=" * ", # 6 chars
subsequent_indent=" ", # 6 chars
)
wrapped_reasons = [
'\n'.join(self._reason_wrapper.wrap(reason))
for reason in self.reasons
]
return '\n' + '\n'.join(wrapped_reasons)
def report(self, collection_name):
if self.reasons:
return self._report_expected(collection_name)
return self._report_unexpected(collection_name)
# FUTURE: expose actual verify result details for a collection on this object, maybe reimplement as dataclass on py3.8+
class CollectionVerifyResult:
def __init__(self, collection_name): # type: (str) -> None
self.collection_name = collection_name # type: str
self.success = True # type: bool
def verify_local_collection(local_collection, remote_collection, artifacts_manager):
# type: (Candidate, t.Optional[Candidate], ConcreteArtifactsManager) -> CollectionVerifyResult
"""Verify integrity of the locally installed collection.
:param local_collection: Collection being checked.
:param remote_collection: Upstream collection (optional, if None, only verify local artifact)
:param artifacts_manager: Artifacts manager.
:return: a collection verify result object.
"""
result = CollectionVerifyResult(local_collection.fqcn)
b_collection_path = to_bytes(local_collection.src, errors='surrogate_or_strict')
display.display("Verifying '{coll!s}'.".format(coll=local_collection))
display.display(
u"Installed collection found at '{path!s}'".
format(path=to_text(local_collection.src)),
)
modified_content = [] # type: list[ModifiedContent]
verify_local_only = remote_collection is None
# partial away the local FS detail so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_installed_dir, b_collection_path)
get_hash_from_validation_source = functools.partial(_get_file_hash, b_collection_path)
if not verify_local_only:
# Compare installed version versus requirement version
if local_collection.ver != remote_collection.ver:
err = (
"{local_fqcn!s} has the version '{local_ver!s}' but "
"is being compared to '{remote_ver!s}'".format(
local_fqcn=local_collection.fqcn,
local_ver=local_collection.ver,
remote_ver=remote_collection.ver,
)
)
display.display(err)
result.success = False
return result
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
signatures = list(local_collection.signatures)
if verify_local_only and local_collection.source_info is not None:
signatures = [info["signature"] for info in local_collection.source_info["signatures"]] + signatures
elif not verify_local_only and remote_collection.signatures:
signatures = list(remote_collection.signatures) + signatures
keyring_configured = artifacts_manager.keyring is not None
if not keyring_configured and signatures:
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server. "
"Configure a keyring for ansible-galaxy to verify "
"the origin of the collection. "
"Skipping signature verification."
)
elif keyring_configured:
if not verify_file_signatures(
local_collection.fqcn,
manifest_file,
signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
):
result.success = False
return result
display.vvvv(f"GnuPG signature verification succeeded, verifying contents of {local_collection}")
if verify_local_only:
# since we're not downloading this, just seed it with the value from disk
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
elif keyring_configured and remote_collection.signatures:
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
else:
# fetch remote
b_temp_tar_path = ( # NOTE: AnsibleError is raised on URLError
artifacts_manager.get_artifact_path
if remote_collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(remote_collection)
display.vvv(
u"Remote collection cached as '{path!s}'".format(path=to_text(b_temp_tar_path))
)
# partial away the tarball details so we can just ask generically during validation
get_json_from_validation_source = functools.partial(_get_json_from_tar_file, b_temp_tar_path)
get_hash_from_validation_source = functools.partial(_get_tar_file_hash, b_temp_tar_path)
# Verify the downloaded manifest hash matches the installed copy before verifying the file manifest
manifest_hash = get_hash_from_validation_source(MANIFEST_FILENAME)
_verify_file_hash(b_collection_path, MANIFEST_FILENAME, manifest_hash, modified_content)
display.display('MANIFEST.json hash: {manifest_hash}'.format(manifest_hash=manifest_hash))
manifest = get_json_from_validation_source(MANIFEST_FILENAME)
# Use the manifest to verify the file manifest checksum
file_manifest_data = manifest['file_manifest_file']
file_manifest_filename = file_manifest_data['name']
expected_hash = file_manifest_data['chksum_%s' % file_manifest_data['chksum_type']]
# Verify the file manifest before using it to verify individual files
_verify_file_hash(b_collection_path, file_manifest_filename, expected_hash, modified_content)
file_manifest = get_json_from_validation_source(file_manifest_filename)
collection_dirs = set()
collection_files = {
os.path.join(b_collection_path, b'MANIFEST.json'),
os.path.join(b_collection_path, b'FILES.json'),
}
# Use the file manifest to verify individual file checksums
for manifest_data in file_manifest['files']:
name = manifest_data['name']
if manifest_data['ftype'] == 'file':
collection_files.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
expected_hash = manifest_data['chksum_%s' % manifest_data['chksum_type']]
_verify_file_hash(b_collection_path, name, expected_hash, modified_content)
if manifest_data['ftype'] == 'dir':
collection_dirs.add(
os.path.join(b_collection_path, to_bytes(name, errors='surrogate_or_strict'))
)
# Find any paths not in the FILES.json
for root, dirs, files in os.walk(b_collection_path):
for name in files:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_files:
modified_content.append(
ModifiedContent(filename=path, expected='the file does not exist', installed='the file exists')
)
for name in dirs:
full_path = os.path.join(root, name)
path = to_text(full_path[len(b_collection_path) + 1::], errors='surrogate_or_strict')
if full_path not in collection_dirs:
modified_content.append(
ModifiedContent(filename=path, expected='the directory does not exist', installed='the directory exists')
)
if modified_content:
result.success = False
display.display(
'Collection {fqcn!s} contains modified content '
'in the following files:'.
format(fqcn=to_text(local_collection.fqcn)),
)
for content_change in modified_content:
display.display(' %s' % content_change.filename)
display.v(" Expected: %s\n Found: %s" % (content_change.expected, content_change.installed))
else:
what = "are internally consistent with its manifest" if verify_local_only else "match the remote collection"
display.display(
"Successfully verified that checksums for '{coll!s}' {what!s}.".
format(coll=local_collection, what=what),
)
return result
def verify_file_signatures(fqcn, manifest_file, detached_signatures, keyring, required_successful_count, ignore_signature_errors):
# type: (str, str, list[str], str, str, list[str]) -> bool
successful = 0
error_messages = []
signature_count_requirements = re.match(SIGNATURE_COUNT_RE, required_successful_count).groupdict()
strict = signature_count_requirements['strict'] or False
require_all = signature_count_requirements['all']
require_count = signature_count_requirements['count']
if require_count is not None:
require_count = int(require_count)
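    # How required_successful_count is interpreted by the branches below:
    #   'all' / '+all' -> every provided signature must verify
    #   'N' / '+N'     -> at least N signatures must verify
    #   a leading '+'  -> strict: fail when there are no successful signatures,
    #                     even if no signatures were provided at all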
for signature in detached_signatures:
signature = to_text(signature, errors='surrogate_or_strict')
try:
verify_file_signature(manifest_file, signature, keyring, ignore_signature_errors)
except CollectionSignatureError as error:
if error.ignore:
# Do not include ignored errors in either the failed or successful count
continue
error_messages.append(error.report(fqcn))
else:
successful += 1
if require_all:
continue
if successful == require_count:
break
if strict and not successful:
verified = False
display.display(f"Signature verification failed for '{fqcn}': no successful signatures")
elif require_all:
verified = not error_messages
if not verified:
display.display(f"Signature verification failed for '{fqcn}': some signatures failed")
else:
verified = not detached_signatures or require_count == successful
if not verified:
display.display(f"Signature verification failed for '{fqcn}': fewer successful signatures than required")
if not verified:
for msg in error_messages:
display.vvvv(msg)
return verified
def verify_file_signature(manifest_file, detached_signature, keyring, ignore_signature_errors):
# type: (str, str, str, list[str]) -> None
"""Run the gpg command and parse any errors. Raises CollectionSignatureError on failure."""
gpg_result, gpg_verification_rc = run_gpg_verify(manifest_file, detached_signature, keyring, display)
if gpg_result:
errors = parse_gpg_errors(gpg_result)
try:
error = next(errors)
except StopIteration:
pass
else:
reasons = []
ignored_reasons = 0
for error in chain([error], errors):
# Get error status (dict key) from the class (dict value)
status_code = list(GPG_ERROR_MAP.keys())[list(GPG_ERROR_MAP.values()).index(error.__class__)]
if status_code in ignore_signature_errors:
ignored_reasons += 1
reasons.append(error.get_gpg_error_description())
ignore = len(reasons) == ignored_reasons
raise CollectionSignatureError(reasons=set(reasons), stdout=gpg_result, rc=gpg_verification_rc, ignore=ignore)
if gpg_verification_rc:
raise CollectionSignatureError(stdout=gpg_result, rc=gpg_verification_rc)
# No errors and rc is 0, verify was successful
return None
def build_collection(u_collection_path, u_output_path, force):
# type: (str, str, bool) -> str
"""Creates the Ansible collection artifact in a .tar.gz file.
:param u_collection_path: The path to the collection to build. This should be the directory that contains the
galaxy.yml file.
:param u_output_path: The path to create the collection build artifact. This should be a directory.
:param force: Whether to overwrite an existing collection build artifact or fail.
:return: The path to the collection build artifact.
"""
b_collection_path = to_bytes(u_collection_path, errors='surrogate_or_strict')
try:
collection_meta = _get_meta_from_src_dir(b_collection_path)
except LookupError as lookup_err:
raise_from(AnsibleError(to_native(lookup_err)), lookup_err)
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], # type: ignore[arg-type]
collection_meta['name'], # type: ignore[arg-type]
collection_meta['build_ignore'], # type: ignore[arg-type]
collection_meta['manifest'], # type: ignore[arg-type]
)
artifact_tarball_file_name = '{ns!s}-{name!s}-{ver!s}.tar.gz'.format(
name=collection_meta['name'],
ns=collection_meta['namespace'],
ver=collection_meta['version'],
)
b_collection_output = os.path.join(
to_bytes(u_output_path),
to_bytes(artifact_tarball_file_name, errors='surrogate_or_strict'),
)
if os.path.exists(b_collection_output):
if os.path.isdir(b_collection_output):
raise AnsibleError("The output collection artifact '%s' already exists, "
"but is a directory - aborting" % to_native(b_collection_output))
elif not force:
raise AnsibleError("The file '%s' already exists. You can use --force to re-create "
"the collection artifact." % to_native(b_collection_output))
collection_output = _build_collection_tar(b_collection_path, b_collection_output, collection_manifest, file_manifest)
return collection_output
def download_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
no_deps, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> None
"""Download Ansible collections as their tarball from a Galaxy server to the path specified and creates a requirements
file of the downloaded requirements to be used for an install.
:param collections: The collections to download, should be a list of tuples with (name, requirement, Galaxy Server).
:param output_path: The path to download the collections to.
:param apis: A list of GalaxyAPIs to query when search for a collection.
:param validate_certs: Whether to validate the certificate if downloading a tarball from a non-Galaxy host.
:param no_deps: Ignore any collection dependencies and only download the base requirements.
:param allow_pre_release: Do not ignore pre-release versions when selecting the latest.
"""
with _display_progress("Process download dependency map"):
dep_map = _resolve_depenency_map(
set(collections),
galaxy_apis=apis,
preferred_candidates=None,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=False,
# Avoid overhead getting signatures since they are not currently applicable to downloaded collections
include_signatures=False,
)
b_output_path = to_bytes(output_path, errors='surrogate_or_strict')
requirements = []
with _display_progress(
"Starting collection download process to '{path!s}'".
format(path=output_path),
):
for fqcn, concrete_coll_pin in dep_map.copy().items(): # FIXME: move into the provider
if concrete_coll_pin.is_virtual:
display.display(
'Virtual collection {coll!s} is not downloadable'.
format(coll=to_text(concrete_coll_pin)),
)
continue
display.display(
u"Downloading collection '{coll!s}' to '{path!s}'".
format(coll=to_text(concrete_coll_pin), path=to_text(b_output_path)),
)
b_src_path = (
artifacts_manager.get_artifact_path
if concrete_coll_pin.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(concrete_coll_pin)
b_dest_path = os.path.join(
b_output_path,
os.path.basename(b_src_path),
)
if concrete_coll_pin.is_dir:
b_dest_path = to_bytes(
build_collection(
to_text(b_src_path, errors='surrogate_or_strict'),
to_text(output_path, errors='surrogate_or_strict'),
force=True,
),
errors='surrogate_or_strict',
)
else:
shutil.copy(to_native(b_src_path), to_native(b_dest_path))
display.display(
"Collection '{coll!s}' was downloaded successfully".
format(coll=concrete_coll_pin),
)
requirements.append({
# FIXME: Consider using a more specific upgraded format
# FIXME: having FQCN in the name field, with src field
# FIXME: pointing to the file path, and explicitly set
# FIXME: type. If version and name are set, it'd
# FIXME: perform validation against the actual metadata
# FIXME: in the artifact src points at.
'name': to_native(os.path.basename(b_dest_path)),
'version': concrete_coll_pin.ver,
})
requirements_path = os.path.join(output_path, 'requirements.yml')
b_requirements_path = to_bytes(
requirements_path, errors='surrogate_or_strict',
)
display.display(
u'Writing requirements.yml file of downloaded collections '
"to '{path!s}'".format(path=to_text(requirements_path)),
)
yaml_bytes = to_bytes(
yaml_dump({'collections': requirements}),
errors='surrogate_or_strict',
)
with open(b_requirements_path, mode='wb') as req_fd:
req_fd.write(yaml_bytes)
def publish_collection(collection_path, api, wait, timeout):
"""Publish an Ansible collection tarball into an Ansible Galaxy server.
:param collection_path: The path to the collection tarball to publish.
:param api: A GalaxyAPI to publish the collection to.
:param wait: Whether to wait until the import process is complete.
:param timeout: The time in seconds to wait for the import process to finish, 0 is indefinite.
"""
import_uri = api.publish_collection(collection_path)
if wait:
# Galaxy returns a url fragment which differs between v2 and v3. The second to last entry is
# always the task_id, though.
# v2: {"task": "https://galaxy-dev.ansible.com/api/v2/collection-imports/35573/"}
# v3: {"task": "/api/automation-hub/v3/imports/collections/838d1308-a8f4-402c-95cb-7823f3806cd8/"}
task_id = None
for path_segment in reversed(import_uri.split('/')):
if path_segment:
task_id = path_segment
break
if not task_id:
raise AnsibleError("Publishing the collection did not return valid task info. Cannot wait for task status. Returned task info: '%s'" % import_uri)
with _display_progress(
"Collection has been published to the Galaxy server "
"{api.name!s} {api.api_server!s}".format(api=api),
):
api.wait_import_task(task_id, timeout)
display.display("Collection has been successfully published and imported to the Galaxy server %s %s"
% (api.name, api.api_server))
else:
display.display("Collection has been pushed to the Galaxy server %s %s, not waiting until import has "
"completed due to --no-wait being set. Import task results can be found at %s"
% (api.name, api.api_server, import_uri))
def install_collections(
collections, # type: t.Iterable[Requirement]
output_path, # type: str
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
no_deps, # type: bool
force, # type: bool
force_deps, # type: bool
upgrade, # type: bool
allow_pre_release, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
disable_gpg_verify, # type: bool
): # type: (...) -> None
"""Install Ansible collections to the path specified.
:param collections: The collections to install.
:param output_path: The path to install the collections to.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param validate_certs: Whether to validate the certificates if downloading a tarball.
:param ignore_errors: Whether to ignore any errors when installing the collection.
:param no_deps: Ignore any collection dependencies and only install the base requirements.
:param force: Re-install a collection if it has already been installed.
:param force_deps: Re-install a collection as well as its dependencies if they have already been installed.
"""
existing_collections = {
Requirement(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in find_existing_collections(output_path, artifacts_manager)
}
unsatisfied_requirements = set(
chain.from_iterable(
(
Requirement.from_dir_path(sub_coll, artifacts_manager)
for sub_coll in (
artifacts_manager.
get_direct_collection_dependencies(install_req).
keys()
)
)
if install_req.is_subdirs else (install_req, )
for install_req in collections
),
)
requested_requirements_names = {req.fqcn for req in unsatisfied_requirements}
# NOTE: Don't attempt to reevaluate already installed deps
# NOTE: unless `--force` or `--force-with-deps` is passed
unsatisfied_requirements -= set() if force or force_deps else {
req
for req in unsatisfied_requirements
for exs in existing_collections
if req.fqcn == exs.fqcn and meets_requirements(exs.ver, req.ver)
}
if not unsatisfied_requirements and not upgrade:
display.display(
'Nothing to do. All requested collections are already '
'installed. If you want to reinstall them, '
'consider using `--force`.'
)
return
# FIXME: This probably needs to be improved to
# FIXME: properly match differing src/type.
existing_non_requested_collections = {
coll for coll in existing_collections
if coll.fqcn not in requested_requirements_names
}
preferred_requirements = (
[] if force_deps
else existing_non_requested_collections if force
else existing_collections
)
preferred_collections = {
# NOTE: No need to include signatures if the collection is already installed
Candidate(coll.fqcn, coll.ver, coll.src, coll.type, None)
for coll in preferred_requirements
}
with _display_progress("Process install dependency map"):
dependency_map = _resolve_depenency_map(
collections,
galaxy_apis=apis,
preferred_candidates=preferred_collections,
concrete_artifacts_manager=artifacts_manager,
no_deps=no_deps,
allow_pre_release=allow_pre_release,
upgrade=upgrade,
include_signatures=not disable_gpg_verify,
)
keyring_exists = artifacts_manager.keyring is not None
with _display_progress("Starting collection install process"):
for fqcn, concrete_coll_pin in dependency_map.items():
if concrete_coll_pin.is_virtual:
display.vvvv(
"'{coll!s}' is virtual, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if concrete_coll_pin in preferred_collections:
display.display(
"'{coll!s}' is already installed, skipping.".
format(coll=to_text(concrete_coll_pin)),
)
continue
if not disable_gpg_verify and concrete_coll_pin.signatures and not keyring_exists:
# Duplicate warning msgs are not displayed
display.warning(
"The GnuPG keyring used for collection signature "
"verification was not configured but signatures were "
"provided by the Galaxy server to verify authenticity. "
"Configure a keyring for ansible-galaxy to use "
"or disable signature verification. "
"Skipping signature verification."
)
try:
install(concrete_coll_pin, output_path, artifacts_manager)
except AnsibleError as err:
if ignore_errors:
display.warning(
'Failed to install collection {coll!s} but skipping '
'due to --ignore-errors being set. Error: {error!s}'.
format(
coll=to_text(concrete_coll_pin),
error=to_text(err),
)
)
else:
raise
# NOTE: imported in ansible.cli.galaxy
def validate_collection_name(name): # type: (str) -> str
"""Validates the collection name as an input from the user or a requirements file fit the requirements.
:param name: The input name with optional range specifier split by ':'.
:return: The input value, required for argparse validation.
"""
collection, dummy, dummy = name.partition(':')
if AnsibleCollectionRef.is_valid_collection_name(collection):
return name
raise AnsibleError("Invalid collection name '%s', "
"name must be in the format <namespace>.<collection>. \n"
"Please make sure namespace and collection name contains "
"characters from [a-zA-Z0-9_] only." % name)
# NOTE: imported in ansible.cli.galaxy
def validate_collection_path(collection_path): # type: (str) -> str
"""Ensure a given path ends with 'ansible_collections'
:param collection_path: The path that should end in 'ansible_collections'
:return: collection_path ending in 'ansible_collections' if it does not already.
"""
if os.path.split(collection_path)[1] != 'ansible_collections':
return os.path.join(collection_path, 'ansible_collections')
return collection_path
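# For example, '/root/.ansible/collections' becomes
# '/root/.ansible/collections/ansible_collections', while a path already ending in
# 'ansible_collections' is returned unchanged.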
def verify_collections(
collections, # type: t.Iterable[Requirement]
search_paths, # type: t.Iterable[str]
apis, # type: t.Iterable[GalaxyAPI]
ignore_errors, # type: bool
local_verify_only, # type: bool
artifacts_manager, # type: ConcreteArtifactsManager
): # type: (...) -> list[CollectionVerifyResult]
r"""Verify the integrity of locally installed collections.
:param collections: The collections to check.
:param search_paths: Locations for the local collection lookup.
:param apis: A list of GalaxyAPIs to query when searching for a collection.
:param ignore_errors: Whether to ignore any errors when verifying the collection.
:param local_verify_only: When True, skip downloads and only verify local manifests.
:param artifacts_manager: Artifacts manager.
:return: list of CollectionVerifyResult objects describing the results of each collection verification
"""
results = [] # type: list[CollectionVerifyResult]
api_proxy = MultiGalaxyAPIProxy(apis, artifacts_manager)
with _display_progress():
for collection in collections:
try:
if collection.is_concrete_artifact:
raise AnsibleError(
message="'{coll_type!s}' type is not supported. "
'The format namespace.name is expected.'.
format(coll_type=collection.type)
)
# NOTE: Verify local collection exists before
# NOTE: downloading its source artifact from
# NOTE: a galaxy server.
default_err = 'Collection %s is not installed in any of the collection paths.' % collection.fqcn
for search_path in search_paths:
b_search_path = to_bytes(
os.path.join(
search_path,
collection.namespace, collection.name,
),
errors='surrogate_or_strict',
)
if not os.path.isdir(b_search_path):
continue
if not _is_installed_collection_dir(b_search_path):
default_err = (
"Collection %s does not have a MANIFEST.json. "
"A MANIFEST.json is expected if the collection has been built "
"and installed via ansible-galaxy" % collection.fqcn
)
continue
local_collection = Candidate.from_dir_path(
b_search_path, artifacts_manager,
)
supplemental_signatures = [
get_signature_from_source(source, display)
for source in collection.signature_sources or []
]
local_collection = Candidate(
local_collection.fqcn,
local_collection.ver,
local_collection.src,
local_collection.type,
signatures=frozenset(supplemental_signatures),
)
break
else:
raise AnsibleError(message=default_err)
if local_verify_only:
remote_collection = None
else:
signatures = api_proxy.get_signatures(local_collection)
signatures.extend([
get_signature_from_source(source, display)
for source in collection.signature_sources or []
])
remote_collection = Candidate(
collection.fqcn,
collection.ver if collection.ver != '*'
else local_collection.ver,
None, 'galaxy',
frozenset(signatures),
)
                    # Download the collection from a galaxy server for comparison
try:
# NOTE: If there are no signatures, trigger the lookup. If found,
# NOTE: it'll cache download URL and token in artifact manager.
# NOTE: If there are no Galaxy server signatures, only user-provided signature URLs,
# NOTE: those alone validate the MANIFEST.json and the remote collection is not downloaded.
# NOTE: The remote MANIFEST.json is only used in verification if there are no signatures.
if not signatures and not collection.signature_sources:
api_proxy.get_collection_version_metadata(
remote_collection,
)
except AnsibleError as e: # FIXME: does this actually emit any errors?
# FIXME: extract the actual message and adjust this:
expected_error_msg = (
'Failed to find collection {coll.fqcn!s}:{coll.ver!s}'.
format(coll=collection)
)
if e.message == expected_error_msg:
raise AnsibleError(
'Failed to find remote collection '
"'{coll!s}' on any of the galaxy servers".
format(coll=collection)
)
raise
result = verify_local_collection(local_collection, remote_collection, artifacts_manager)
results.append(result)
except AnsibleError as err:
if ignore_errors:
display.warning(
"Failed to verify collection '{coll!s}' but skipping "
'due to --ignore-errors being set. '
'Error: {err!s}'.
format(coll=collection, err=to_text(err)),
)
else:
raise
return results
@contextmanager
def _tempdir():
b_temp_path = tempfile.mkdtemp(dir=to_bytes(C.DEFAULT_LOCAL_TMP, errors='surrogate_or_strict'))
try:
yield b_temp_path
finally:
shutil.rmtree(b_temp_path)
@contextmanager
def _display_progress(msg=None):
config_display = C.GALAXY_DISPLAY_PROGRESS
display_wheel = sys.stdout.isatty() if config_display is None else config_display
global display
if msg is not None:
display.display(msg)
if not display_wheel:
yield
return
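    # The worker thread below animates a spinner while draining a queue of
    # (method, args, kwargs) tuples; the global display is temporarily replaced by a
    # proxy that enqueues calls, so messages from the main thread still reach the
    # real display in order.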
def progress(display_queue, actual_display):
actual_display.debug("Starting display_progress display thread")
t = threading.current_thread()
while True:
for c in "|/-\\":
actual_display.display(c + "\b", newline=False)
time.sleep(0.1)
# Display a message from the main thread
while True:
try:
method, args, kwargs = display_queue.get(block=False, timeout=0.1)
except queue.Empty:
break
else:
func = getattr(actual_display, method)
func(*args, **kwargs)
if getattr(t, "finish", False):
actual_display.debug("Received end signal for display_progress display thread")
return
class DisplayThread(object):
def __init__(self, display_queue):
self.display_queue = display_queue
def __getattr__(self, attr):
def call_display(*args, **kwargs):
self.display_queue.put((attr, args, kwargs))
return call_display
    # Temporarily override the global display class with our own, which adds the calls to a queue for the thread to process.
old_display = display
try:
display_queue = queue.Queue()
display = DisplayThread(display_queue)
t = threading.Thread(target=progress, args=(display_queue, old_display))
t.daemon = True
t.start()
try:
yield
finally:
t.finish = True
t.join()
except Exception:
        # The exception is re-raised so we can be sure the thread is finished and not using the display anymore
raise
finally:
display = old_display
def _verify_file_hash(b_path, filename, expected_hash, error_queue):
b_file_path = to_bytes(os.path.join(to_text(b_path), filename), errors='surrogate_or_strict')
if not os.path.isfile(b_file_path):
actual_hash = None
else:
with open(b_file_path, mode='rb') as file_object:
actual_hash = _consume_file(file_object)
if expected_hash != actual_hash:
error_queue.append(ModifiedContent(filename=filename, expected=expected_hash, installed=actual_hash))
def _make_manifest():
return {
'files': [
{
'name': '.',
'ftype': 'dir',
'chksum_type': None,
'chksum_sha256': None,
'format': MANIFEST_FORMAT,
},
],
'format': MANIFEST_FORMAT,
}
def _make_entry(name, ftype, chksum_type='sha256', chksum=None):
return {
'name': name,
'ftype': ftype,
'chksum_type': chksum_type if chksum else None,
f'chksum_{chksum_type}': chksum,
'format': MANIFEST_FORMAT
}
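# Illustrative result: _make_entry('roles/x/tasks/main.yml', 'file', chksum='<sha256 hex>')
# -> {'name': 'roles/x/tasks/main.yml', 'ftype': 'file', 'chksum_type': 'sha256',
#    'chksum_sha256': '<sha256 hex>', 'format': MANIFEST_FORMAT}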
def _build_files_manifest(b_collection_path, namespace, name, ignore_patterns, manifest_control):
# type: (bytes, str, str, list[str], dict[str, t.Any]) -> FilesManifestType
if ignore_patterns and manifest_control is not Sentinel:
raise AnsibleError('"build_ignore" and "manifest" are mutually exclusive')
if manifest_control is not Sentinel:
return _build_files_manifest_distlib(
b_collection_path,
namespace,
name,
manifest_control,
)
return _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns)
def _build_files_manifest_distlib(b_collection_path, namespace, name, manifest_control):
# type: (bytes, str, str, dict[str, t.Any]) -> FilesManifestType
if not HAS_DISTLIB:
raise AnsibleError('Use of "manifest" requires the python "distlib" library')
if manifest_control is None:
manifest_control = {}
try:
control = ManifestControl(**manifest_control)
except TypeError as ex:
raise AnsibleError(f'Invalid "manifest" provided: {ex}')
if not is_sequence(control.directives):
raise AnsibleError(f'"manifest.directives" must be a list, got: {control.directives.__class__.__name__}')
if not isinstance(control.omit_default_directives, bool):
raise AnsibleError(
'"manifest.omit_default_directives" is expected to be a boolean, got: '
f'{control.omit_default_directives.__class__.__name__}'
)
if control.omit_default_directives and not control.directives:
raise AnsibleError(
'"manifest.omit_default_directives" was set to True, but no directives were defined '
'in "manifest.directives". This would produce an empty collection artifact.'
)
directives = []
if control.omit_default_directives:
directives.extend(control.directives)
else:
directives.extend([
'include meta/*.yml',
'include *.txt *.md *.rst COPYING LICENSE',
'recursive-include tests **',
'recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt',
'recursive-include roles **.yml **.yaml **.json **.j2',
'recursive-include playbooks **.yml **.yaml **.json',
'recursive-include changelogs **.yml **.yaml',
'recursive-include plugins */**.py',
])
plugins = set(l.package.split('.')[-1] for d, l in get_all_plugin_loaders())
for plugin in sorted(plugins):
if plugin in ('modules', 'module_utils'):
continue
elif plugin in C.DOCUMENTABLE_PLUGINS:
directives.append(
f'recursive-include plugins/{plugin} **.yml **.yaml'
)
directives.extend([
'recursive-include plugins/modules **.ps1 **.yml **.yaml',
'recursive-include plugins/module_utils **.ps1 **.psm1 **.cs',
])
directives.extend(control.directives)
directives.extend([
f'exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json {namespace}-{name}-*.tar.gz',
'recursive-exclude tests/output **',
'global-exclude /.* /__pycache__',
])
display.vvv('Manifest Directives:')
display.vvv(textwrap.indent('\n'.join(directives), ' '))
u_collection_path = to_text(b_collection_path, errors='surrogate_or_strict')
m = Manifest(u_collection_path)
for directive in directives:
try:
m.process_directive(directive)
except DistlibException as e:
raise AnsibleError(f'Invalid manifest directive: {e}')
except Exception as e:
raise AnsibleError(f'Unknown error processing manifest directive: {e}')
manifest = _make_manifest()
for abs_path in m.sorted(wantdirs=True):
rel_path = os.path.relpath(abs_path, u_collection_path)
if os.path.isdir(abs_path):
manifest_entry = _make_entry(rel_path, 'dir')
else:
manifest_entry = _make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(abs_path, hash_func=sha256)
)
manifest['files'].append(manifest_entry)
return manifest
def _build_files_manifest_walk(b_collection_path, namespace, name, ignore_patterns):
# type: (bytes, str, str, list[str]) -> FilesManifestType
# We always ignore .pyc and .retry files as well as some well known version control directories. The ignore
# patterns can be extended by the build_ignore key in galaxy.yml
b_ignore_patterns = [
b'MANIFEST.json',
b'FILES.json',
b'galaxy.yml',
b'galaxy.yaml',
b'.git',
b'*.pyc',
b'*.retry',
b'tests/output', # Ignore ansible-test result output directory.
to_bytes('{0}-{1}-*.tar.gz'.format(namespace, name)), # Ignores previously built artifacts in the root dir.
]
b_ignore_patterns += [to_bytes(p) for p in ignore_patterns]
b_ignore_dirs = frozenset([b'CVS', b'.bzr', b'.hg', b'.git', b'.svn', b'__pycache__', b'.tox'])
manifest = _make_manifest()
def _walk(b_path, b_top_level_dir):
for b_item in os.listdir(b_path):
b_abs_path = os.path.join(b_path, b_item)
b_rel_base_dir = b'' if b_path == b_top_level_dir else b_path[len(b_top_level_dir) + 1:]
b_rel_path = os.path.join(b_rel_base_dir, b_item)
rel_path = to_text(b_rel_path, errors='surrogate_or_strict')
if os.path.isdir(b_abs_path):
if any(b_item == b_path for b_path in b_ignore_dirs) or \
any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
if os.path.islink(b_abs_path):
b_link_target = os.path.realpath(b_abs_path)
if not _is_child_path(b_link_target, b_top_level_dir):
display.warning("Skipping '%s' as it is a symbolic link to a directory outside the collection"
% to_text(b_abs_path))
continue
manifest['files'].append(_make_entry(rel_path, 'dir'))
if not os.path.islink(b_abs_path):
_walk(b_abs_path, b_top_level_dir)
else:
if any(fnmatch.fnmatch(b_rel_path, b_pattern) for b_pattern in b_ignore_patterns):
display.vvv("Skipping '%s' for collection build" % to_text(b_abs_path))
continue
                # Handling of file symlinks occurs in _build_collection_tar; the manifest entry for a symlink is the
                # same as for a normal file.
manifest['files'].append(
_make_entry(
rel_path,
'file',
chksum_type='sha256',
chksum=secure_hash(b_abs_path, hash_func=sha256)
)
)
_walk(b_collection_path, b_collection_path)
return manifest
# FIXME: accept a dict produced from `galaxy.yml` instead of separate args
def _build_manifest(namespace, name, version, authors, readme, tags, description, license_file,
dependencies, repository, documentation, homepage, issues, **kwargs):
manifest = {
'collection_info': {
'namespace': namespace,
'name': name,
'version': version,
'authors': authors,
'readme': readme,
'tags': tags,
'description': description,
'license': kwargs['license'],
'license_file': license_file or None, # Handle galaxy.yml having an empty string (None)
'dependencies': dependencies,
'repository': repository,
'documentation': documentation,
'homepage': homepage,
'issues': issues,
},
'file_manifest_file': {
'name': 'FILES.json',
'ftype': 'file',
'chksum_type': 'sha256',
'chksum_sha256': None, # Filled out in _build_collection_tar
'format': MANIFEST_FORMAT
},
'format': MANIFEST_FORMAT,
}
return manifest
def _build_collection_tar(
b_collection_path, # type: bytes
b_tar_path, # type: bytes
collection_manifest, # type: CollectionManifestType
file_manifest, # type: FilesManifestType
): # type: (...) -> str
"""Build a tar.gz collection artifact from the manifest data."""
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
with _tempdir() as b_temp_path:
b_tar_filepath = os.path.join(b_temp_path, os.path.basename(b_tar_path))
with tarfile.open(b_tar_filepath, mode='w:gz') as tar_file:
# Add the MANIFEST.json and FILES.json file to the archive
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_io = BytesIO(b)
tar_info = tarfile.TarInfo(name)
tar_info.size = len(b)
tar_info.mtime = int(time.time())
tar_info.mode = 0o0644
tar_file.addfile(tarinfo=tar_info, fileobj=b_io)
for file_info in file_manifest['files']: # type: ignore[union-attr]
if file_info['name'] == '.':
continue
# arcname expects a native string, cannot be bytes
filename = to_native(file_info['name'], errors='surrogate_or_strict')
b_src_path = os.path.join(b_collection_path, to_bytes(filename, errors='surrogate_or_strict'))
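                # Normalize ownership and permissions so the resulting artifact is
                # reproducible regardless of the builder's uid/gid and umask.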
def reset_stat(tarinfo):
if tarinfo.type != tarfile.SYMTYPE:
existing_is_exec = tarinfo.mode & stat.S_IXUSR
tarinfo.mode = 0o0755 if existing_is_exec or tarinfo.isdir() else 0o0644
tarinfo.uid = tarinfo.gid = 0
tarinfo.uname = tarinfo.gname = ''
return tarinfo
if os.path.islink(b_src_path):
b_link_target = os.path.realpath(b_src_path)
if _is_child_path(b_link_target, b_collection_path):
b_rel_path = os.path.relpath(b_link_target, start=os.path.dirname(b_src_path))
tar_info = tarfile.TarInfo(filename)
tar_info.type = tarfile.SYMTYPE
tar_info.linkname = to_native(b_rel_path, errors='surrogate_or_strict')
tar_info = reset_stat(tar_info)
tar_file.addfile(tarinfo=tar_info)
continue
# Dealing with a normal file, just add it by name.
tar_file.add(
to_native(os.path.realpath(b_src_path)),
arcname=filename,
recursive=False,
filter=reset_stat,
)
shutil.copy(to_native(b_tar_filepath), to_native(b_tar_path))
collection_name = "%s.%s" % (collection_manifest['collection_info']['namespace'],
collection_manifest['collection_info']['name'])
tar_path = to_text(b_tar_path)
display.display(u'Created collection for %s at %s' % (collection_name, tar_path))
return tar_path
def _build_collection_dir(b_collection_path, b_collection_output, collection_manifest, file_manifest):
"""Build a collection directory from the manifest data.
This should follow the same pattern as _build_collection_tar.
"""
os.makedirs(b_collection_output, mode=0o0755)
files_manifest_json = to_bytes(json.dumps(file_manifest, indent=True), errors='surrogate_or_strict')
collection_manifest['file_manifest_file']['chksum_sha256'] = secure_hash_s(files_manifest_json, hash_func=sha256)
collection_manifest_json = to_bytes(json.dumps(collection_manifest, indent=True), errors='surrogate_or_strict')
# Write contents to the files
for name, b in [(MANIFEST_FILENAME, collection_manifest_json), ('FILES.json', files_manifest_json)]:
b_path = os.path.join(b_collection_output, to_bytes(name, errors='surrogate_or_strict'))
with open(b_path, 'wb') as file_obj, BytesIO(b) as b_io:
shutil.copyfileobj(b_io, file_obj)
os.chmod(b_path, 0o0644)
base_directories = []
for file_info in sorted(file_manifest['files'], key=lambda x: x['name']):
if file_info['name'] == '.':
continue
src_file = os.path.join(b_collection_path, to_bytes(file_info['name'], errors='surrogate_or_strict'))
dest_file = os.path.join(b_collection_output, to_bytes(file_info['name'], errors='surrogate_or_strict'))
existing_is_exec = os.stat(src_file).st_mode & stat.S_IXUSR
mode = 0o0755 if existing_is_exec else 0o0644
if os.path.isdir(src_file):
mode = 0o0755
base_directories.append(src_file)
os.mkdir(dest_file, mode)
else:
shutil.copyfile(src_file, dest_file)
os.chmod(dest_file, mode)
collection_output = to_text(b_collection_output)
return collection_output
def find_existing_collections(path, artifacts_manager):
"""Locate all collections under a given path.
:param path: Collection dirs layout search path.
:param artifacts_manager: Artifacts manager.
"""
b_path = to_bytes(path, errors='surrogate_or_strict')
# FIXME: consider using `glob.glob()` to simplify looping
for b_namespace in os.listdir(b_path):
b_namespace_path = os.path.join(b_path, b_namespace)
if os.path.isfile(b_namespace_path):
continue
# FIXME: consider feeding b_namespace_path to Candidate.from_dir_path to get subdirs automatically
for b_collection in os.listdir(b_namespace_path):
b_collection_path = os.path.join(b_namespace_path, b_collection)
if not os.path.isdir(b_collection_path):
continue
try:
req = Candidate.from_dir_path_as_unknown(b_collection_path, artifacts_manager)
except ValueError as val_err:
raise_from(AnsibleError(val_err), val_err)
display.vvv(
u"Found installed collection {coll!s} at '{path!s}'".
format(coll=to_text(req), path=to_text(req.src))
)
yield req
def install(collection, path, artifacts_manager): # FIXME: mv to dataclasses?
# type: (Candidate, str, ConcreteArtifactsManager) -> None
"""Install a collection under a given path.
:param collection: Collection to be installed.
:param path: Collection dirs layout path.
:param artifacts_manager: Artifacts manager.
"""
b_artifact_path = (
artifacts_manager.get_artifact_path if collection.is_concrete_artifact
else artifacts_manager.get_galaxy_artifact_path
)(collection)
collection_path = os.path.join(path, collection.namespace, collection.name)
b_collection_path = to_bytes(collection_path, errors='surrogate_or_strict')
display.display(
u"Installing '{coll!s}' to '{path!s}'".
format(coll=to_text(collection), path=collection_path),
)
if os.path.exists(b_collection_path):
shutil.rmtree(b_collection_path)
if collection.is_dir:
install_src(collection, b_artifact_path, b_collection_path, artifacts_manager)
else:
install_artifact(
b_artifact_path,
b_collection_path,
artifacts_manager._b_working_directory,
collection.signatures,
artifacts_manager.keyring,
artifacts_manager.required_successful_signature_count,
artifacts_manager.ignore_signature_errors,
)
if (collection.is_online_index_pointer and isinstance(collection.src, GalaxyAPI)):
write_source_metadata(
collection,
b_collection_path,
artifacts_manager
)
display.display(
'{coll!s} was installed successfully'.
format(coll=to_text(collection)),
)
def write_source_metadata(collection, b_collection_path, artifacts_manager):
# type: (Candidate, bytes, ConcreteArtifactsManager) -> None
source_data = artifacts_manager.get_galaxy_artifact_source_info(collection)
b_yaml_source_data = to_bytes(yaml_dump(source_data), errors='surrogate_or_strict')
b_info_dest = collection.construct_galaxy_info_path(b_collection_path)
b_info_dir = os.path.split(b_info_dest)[0]
if os.path.exists(b_info_dir):
shutil.rmtree(b_info_dir)
try:
os.mkdir(b_info_dir, mode=0o0755)
with open(b_info_dest, mode='w+b') as fd:
fd.write(b_yaml_source_data)
os.chmod(b_info_dest, 0o0644)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
if os.path.isdir(b_info_dir):
shutil.rmtree(b_info_dir)
raise
def verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
# type: (str, list[str], str, str, list[str]) -> None
failed_verify = False
coll_path_parts = to_text(manifest_file, errors='surrogate_or_strict').split(os.path.sep)
collection_name = '%s.%s' % (coll_path_parts[-3], coll_path_parts[-2]) # get 'ns' and 'coll' from /path/to/ns/coll/MANIFEST.json
if not verify_file_signatures(collection_name, manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors):
raise AnsibleError(f"Not installing {collection_name} because GnuPG signature verification failed.")
display.vvvv(f"GnuPG signature verification succeeded for {collection_name}")
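# Illustrative sketch (not from the original source): how verify_artifact_manifest
# recovers the collection name from the manifest path. For
#     /root/.ansible/collections/ansible_collections/amazon/aws/MANIFEST.json
# coll_path_parts[-3] == 'amazon' and coll_path_parts[-2] == 'aws', so
# collection_name == 'amazon.aws'.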
def install_artifact(b_coll_targz_path, b_collection_path, b_temp_path, signatures, keyring, required_signature_count, ignore_signature_errors):
"""Install a collection from tarball under a given path.
:param b_coll_targz_path: Collection tarball to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_temp_path: Temporary dir path.
:param signatures: frozenset of signatures to verify the MANIFEST.json
:param keyring: The keyring used during GPG verification
:param required_signature_count: The number of signatures that must successfully verify the collection
:param ignore_signature_errors: GPG errors to ignore during signature verification
"""
try:
with tarfile.open(b_coll_targz_path, mode='r') as collection_tar:
# Verify the signature on the MANIFEST.json before extracting anything else
_extract_tar_file(collection_tar, MANIFEST_FILENAME, b_collection_path, b_temp_path)
if keyring is not None:
manifest_file = os.path.join(to_text(b_collection_path, errors='surrogate_or_strict'), MANIFEST_FILENAME)
verify_artifact_manifest(manifest_file, signatures, keyring, required_signature_count, ignore_signature_errors)
files_member_obj = collection_tar.getmember('FILES.json')
with _tarfile_extract(collection_tar, files_member_obj) as (dummy, files_obj):
files = json.loads(to_text(files_obj.read(), errors='surrogate_or_strict'))
_extract_tar_file(collection_tar, 'FILES.json', b_collection_path, b_temp_path)
for file_info in files['files']:
file_name = file_info['name']
if file_name == '.':
continue
if file_info['ftype'] == 'file':
_extract_tar_file(collection_tar, file_name, b_collection_path, b_temp_path,
expected_hash=file_info['chksum_sha256'])
else:
_extract_tar_dir(collection_tar, file_name, b_collection_path)
except Exception:
# Ensure we don't leave the dir behind in case of a failure.
shutil.rmtree(b_collection_path)
b_namespace_path = os.path.dirname(b_collection_path)
if not os.listdir(b_namespace_path):
os.rmdir(b_namespace_path)
raise
def install_src(collection, b_collection_path, b_collection_output_path, artifacts_manager):
r"""Install the collection from source control into given dir.
Generates the Ansible collection artifact data from a galaxy.yml and
installs the artifact to a directory.
This should follow the same pattern as build_collection, but instead
of creating an artifact, install it.
:param collection: Collection to be installed.
:param b_collection_path: Collection dirs layout path.
:param b_collection_output_path: The installation directory for the \
collection artifact.
:param artifacts_manager: Artifacts manager.
:raises AnsibleError: If no collection metadata found.
"""
collection_meta = artifacts_manager.get_direct_collection_meta(collection)
if 'build_ignore' not in collection_meta: # installed collection, not src
# FIXME: optimize this? use a different process? copy instead of build?
collection_meta['build_ignore'] = []
collection_manifest = _build_manifest(**collection_meta)
file_manifest = _build_files_manifest(
b_collection_path,
collection_meta['namespace'], collection_meta['name'],
collection_meta['build_ignore'],
collection_meta['manifest'],
)
collection_output_path = _build_collection_dir(
b_collection_path, b_collection_output_path,
collection_manifest, file_manifest,
)
display.display(
'Created collection for {coll!s} at {path!s}'.
format(coll=collection, path=collection_output_path)
)
def _extract_tar_dir(tar, dirname, b_dest):
""" Extracts a directory from a collection tar. """
member_names = [to_native(dirname, errors='surrogate_or_strict')]
# Create list of members with and without trailing separator
if not member_names[-1].endswith(os.path.sep):
member_names.append(member_names[-1] + os.path.sep)
    # Try all of the member names and stop on the first one we are able to retrieve successfully
for member in member_names:
try:
tar_member = tar.getmember(member)
except KeyError:
continue
break
else:
# If we still can't find the member, raise a nice error.
raise AnsibleError("Unable to extract '%s' from collection" % to_native(member, errors='surrogate_or_strict'))
b_dir_path = os.path.join(b_dest, to_bytes(dirname, errors='surrogate_or_strict'))
b_parent_path = os.path.dirname(b_dir_path)
try:
os.makedirs(b_parent_path, mode=0o0755)
except OSError as e:
if e.errno != errno.EEXIST:
raise
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dir_path):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(dirname), b_link_path))
os.symlink(b_link_path, b_dir_path)
else:
if not os.path.isdir(b_dir_path):
os.mkdir(b_dir_path, 0o0755)
def _extract_tar_file(tar, filename, b_dest, b_temp_path, expected_hash=None):
""" Extracts a file from a collection tar. """
with _get_tar_file_member(tar, filename) as (tar_member, tar_obj):
if tar_member.type == tarfile.SYMTYPE:
actual_hash = _consume_file(tar_obj)
else:
with tempfile.NamedTemporaryFile(dir=b_temp_path, delete=False) as tmpfile_obj:
actual_hash = _consume_file(tar_obj, tmpfile_obj)
if expected_hash and actual_hash != expected_hash:
raise AnsibleError("Checksum mismatch for '%s' inside collection at '%s'"
% (to_native(filename, errors='surrogate_or_strict'), to_native(tar.name)))
b_dest_filepath = os.path.abspath(os.path.join(b_dest, to_bytes(filename, errors='surrogate_or_strict')))
b_parent_dir = os.path.dirname(b_dest_filepath)
if not _is_child_path(b_parent_dir, b_dest):
raise AnsibleError("Cannot extract tar entry '%s' as it will be placed outside the collection directory"
% to_native(filename, errors='surrogate_or_strict'))
if not os.path.exists(b_parent_dir):
# Seems like Galaxy does not validate if all file entries have a corresponding dir ftype entry. This check
# makes sure we create the parent directory even if it wasn't set in the metadata.
os.makedirs(b_parent_dir, mode=0o0755)
if tar_member.type == tarfile.SYMTYPE:
b_link_path = to_bytes(tar_member.linkname, errors='surrogate_or_strict')
if not _is_child_path(b_link_path, b_dest, link_name=b_dest_filepath):
raise AnsibleError("Cannot extract symlink '%s' in collection: path points to location outside of "
"collection '%s'" % (to_native(filename), b_link_path))
os.symlink(b_link_path, b_dest_filepath)
else:
shutil.move(to_bytes(tmpfile_obj.name, errors='surrogate_or_strict'), b_dest_filepath)
# Default to rw-r--r-- and only add execute if the tar file has execute.
tar_member = tar.getmember(to_native(filename, errors='surrogate_or_strict'))
new_mode = 0o644
if stat.S_IMODE(tar_member.mode) & stat.S_IXUSR:
new_mode |= 0o0111
os.chmod(b_dest_filepath, new_mode)
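# Illustrative sketch (not from the original source): the permission mapping
# applied above. A tar member whose mode has the owner-execute bit set
# (e.g. 0o755) is written to disk as 0o644 | 0o111 == 0o755; any non-executable
# member ends up as 0o644, regardless of the (possibly more permissive) mode
# recorded in the tarball.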
def _get_tar_file_member(tar, filename):
n_filename = to_native(filename, errors='surrogate_or_strict')
try:
member = tar.getmember(n_filename)
except KeyError:
raise AnsibleError("Collection tar at '%s' does not contain the expected file '%s'." % (
to_native(tar.name),
n_filename))
return _tarfile_extract(tar, member)
def _get_json_from_tar_file(b_path, filename):
file_contents = ''
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
bufsize = 65536
data = tar_obj.read(bufsize)
while data:
file_contents += to_text(data)
data = tar_obj.read(bufsize)
return json.loads(file_contents)
def _get_tar_file_hash(b_path, filename):
with tarfile.open(b_path, mode='r') as collection_tar:
with _get_tar_file_member(collection_tar, filename) as (dummy, tar_obj):
return _consume_file(tar_obj)
def _get_file_hash(b_path, filename): # type: (bytes, str) -> str
filepath = os.path.join(b_path, to_bytes(filename, errors='surrogate_or_strict'))
with open(filepath, 'rb') as fp:
return _consume_file(fp)
def _is_child_path(path, parent_path, link_name=None):
""" Checks that path is a path within the parent_path specified. """
b_path = to_bytes(path, errors='surrogate_or_strict')
if link_name and not os.path.isabs(b_path):
# If link_name is specified, path is the source of the link and we need to resolve the absolute path.
b_link_dir = os.path.dirname(to_bytes(link_name, errors='surrogate_or_strict'))
b_path = os.path.abspath(os.path.join(b_link_dir, b_path))
b_parent_path = to_bytes(parent_path, errors='surrogate_or_strict')
return b_path == b_parent_path or b_path.startswith(b_parent_path + to_bytes(os.path.sep))
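# Illustrative sketch (not from the original source; the literal paths are
# assumptions): example inputs to the path-traversal guard above.
#
#     _is_child_path(b'/dest/ns/coll/roles/x', b'/dest/ns/coll')    # True
#     _is_child_path(b'/dest/ns/coll-evil', b'/dest/ns/coll')       # False (separator check)
#     _is_child_path(b'../../etc/passwd', b'/dest/ns/coll',
#                    link_name=b'/dest/ns/coll/link')                # False once resolved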
def _resolve_depenency_map(
requested_requirements, # type: t.Iterable[Requirement]
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
preferred_candidates, # type: t.Iterable[Candidate] | None
no_deps, # type: bool
allow_pre_release, # type: bool
upgrade, # type: bool
include_signatures, # type: bool
): # type: (...) -> dict[str, Candidate]
"""Return the resolved dependency map."""
if not HAS_RESOLVELIB:
raise AnsibleError("Failed to import resolvelib, check that a supported version is installed")
if not HAS_PACKAGING:
raise AnsibleError("Failed to import packaging, check that a supported version is installed")
req = None
try:
dist = distribution('ansible-core')
except Exception:
pass
else:
req = next((rr for r in (dist.requires or []) if (rr := PkgReq(r)).name == 'resolvelib'), None)
finally:
if req is None:
# TODO: replace the hardcoded versions with a warning if the dist info is missing
# display.warning("Unable to find 'ansible-core' distribution requirements to verify the resolvelib version is supported.")
if not RESOLVELIB_LOWERBOUND <= RESOLVELIB_VERSION < RESOLVELIB_UPPERBOUND:
raise AnsibleError(
f"ansible-galaxy requires resolvelib<{RESOLVELIB_UPPERBOUND.vstring},>={RESOLVELIB_LOWERBOUND.vstring}"
)
elif not req.specifier.contains(RESOLVELIB_VERSION.vstring):
raise AnsibleError(f"ansible-galaxy requires {req.name}{req.specifier}")
collection_dep_resolver = build_collection_dependency_resolver(
galaxy_apis=galaxy_apis,
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=requested_requirements,
preferred_candidates=preferred_candidates,
with_deps=not no_deps,
with_pre_releases=allow_pre_release,
upgrade=upgrade,
include_signatures=include_signatures,
)
try:
return collection_dep_resolver.resolve(
requested_requirements,
max_rounds=2000000, # NOTE: same constant pip uses
).mapping
except CollectionDependencyResolutionImpossible as dep_exc:
conflict_causes = (
'* {req.fqcn!s}:{req.ver!s} ({dep_origin!s})'.format(
req=req_inf.requirement,
dep_origin='direct request'
if req_inf.parent is None
else 'dependency of {parent!s}'.
format(parent=req_inf.parent),
)
for req_inf in dep_exc.causes
)
error_msg_lines = list(chain(
(
'Failed to resolve the requested '
'dependencies map. Could not satisfy the following '
'requirements:',
),
conflict_causes,
))
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except CollectionDependencyInconsistentCandidate as dep_exc:
parents = [
"%s.%s:%s" % (p.namespace, p.name, p.ver)
for p in dep_exc.criterion.iter_parent()
if p is not None
]
error_msg_lines = [
(
'Failed to resolve the requested dependencies map. '
'Got the candidate {req.fqcn!s}:{req.ver!s} ({dep_origin!s}) '
'which didn\'t satisfy all of the following requirements:'.
format(
req=dep_exc.candidate,
dep_origin='direct request'
if not parents else 'dependency of {parent!s}'.
format(parent=', '.join(parents))
)
)
]
for req in dep_exc.criterion.iter_requirement():
error_msg_lines.append(
'* {req.fqcn!s}:{req.ver!s}'.format(req=req)
)
raise raise_from( # NOTE: Leading "raise" is a hack for mypy bug #9717
AnsibleError('\n'.join(error_msg_lines)),
dep_exc,
)
except ValueError as exc:
raise AnsibleError(to_native(exc)) from exc
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
In the dependency resolution stage, ansible-galaxy still contacts https://galaxy.ansible.com/api/v2/collections/xxx/yyy even when it has already found the collection in collections_paths. I have not tried with other collections, but I'm confident this pattern is consistent.
I've verified it happens in recent versions such as Ansible Core 2.12.4 and 2.12.0, and also in older versions such as Ansible 2.9.25.
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are locally
```console
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an isolated environment), try to install a module that has dependencies already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
lib/ansible/galaxy/collection/galaxy_api_proxy.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""A facade for interfacing with multiple Galaxy instances."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import typing as t
if t.TYPE_CHECKING:
from ansible.galaxy.api import CollectionVersionMetadata
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate, Requirement,
)
from ansible.galaxy.api import GalaxyAPI, GalaxyError
from ansible.module_utils._text import to_text
from ansible.utils.display import Display
display = Display()
class MultiGalaxyAPIProxy:
"""A proxy that abstracts talking to multiple Galaxy instances."""
def __init__(self, apis, concrete_artifacts_manager):
# type: (t.Iterable[GalaxyAPI], ConcreteArtifactsManager) -> None
"""Initialize the target APIs list."""
self._apis = apis
self._concrete_art_mgr = concrete_artifacts_manager
def _get_collection_versions(self, requirement):
# type: (Requirement) -> t.Iterator[tuple[GalaxyAPI, str]]
"""Helper for get_collection_versions.
Yield api, version pairs for all APIs,
and reraise the last error if no valid API was found.
"""
found_api = False
last_error = None # type: Exception | None
api_lookup_order = (
(requirement.src, )
if isinstance(requirement.src, GalaxyAPI)
else self._apis
)
for api in api_lookup_order:
try:
versions = api.get_collection_versions(requirement.namespace, requirement.name)
except GalaxyError as api_err:
last_error = api_err
except Exception as unknown_err:
display.warning(
"Skipping Galaxy server {server!s}. "
"Got an unexpected error when getting "
"available versions of collection {fqcn!s}: {err!s}".
format(
server=api.api_server,
fqcn=requirement.fqcn,
err=to_text(unknown_err),
)
)
last_error = unknown_err
else:
found_api = True
for version in versions:
yield api, version
if not found_api and last_error is not None:
raise last_error
def get_collection_versions(self, requirement):
# type: (Requirement) -> t.Iterable[tuple[str, GalaxyAPI]]
"""Get a set of unique versions for FQCN on Galaxy servers."""
if requirement.is_concrete_artifact:
return {
(
self._concrete_art_mgr.
get_direct_collection_version(requirement),
requirement.src,
),
}
api_lookup_order = (
(requirement.src, )
if isinstance(requirement.src, GalaxyAPI)
else self._apis
)
return set(
(version, api)
for api, version in self._get_collection_versions(
requirement,
)
)
def get_collection_version_metadata(self, collection_candidate):
# type: (Candidate) -> CollectionVersionMetadata
"""Retrieve collection metadata of a given candidate."""
api_lookup_order = (
(collection_candidate.src, )
if isinstance(collection_candidate.src, GalaxyAPI)
else self._apis
)
last_err: t.Optional[Exception]
for api in api_lookup_order:
try:
version_metadata = api.get_collection_version_metadata(
collection_candidate.namespace,
collection_candidate.name,
collection_candidate.ver,
)
except GalaxyError as api_err:
last_err = api_err
except Exception as unknown_err:
# `verify` doesn't use `get_collection_versions` since the version is already known.
# Do the same as `install` and `download` by trying all APIs before failing.
# Warn for debugging purposes, since the Galaxy server may be unexpectedly down.
last_err = unknown_err
display.warning(
"Skipping Galaxy server {server!s}. "
"Got an unexpected error when getting "
"available versions of collection {fqcn!s}: {err!s}".
format(
server=api.api_server,
fqcn=collection_candidate.fqcn,
err=to_text(unknown_err),
)
)
else:
self._concrete_art_mgr.save_collection_source(
collection_candidate,
version_metadata.download_url,
version_metadata.artifact_sha256,
api.token,
version_metadata.signatures_url,
version_metadata.signatures,
)
return version_metadata
raise last_err
def get_collection_dependencies(self, collection_candidate):
# type: (Candidate) -> dict[str, str]
# FIXME: return Requirement instances instead?
"""Retrieve collection dependencies of a given candidate."""
if collection_candidate.is_concrete_artifact:
return (
self.
_concrete_art_mgr.
get_direct_collection_dependencies
)(collection_candidate)
return (
self.
get_collection_version_metadata(collection_candidate).
dependencies
)
def get_signatures(self, collection_candidate):
# type: (Candidate) -> list[str]
namespace = collection_candidate.namespace
name = collection_candidate.name
version = collection_candidate.ver
last_err = None # type: Exception | None
api_lookup_order = (
(collection_candidate.src, )
if isinstance(collection_candidate.src, GalaxyAPI)
else self._apis
)
for api in api_lookup_order:
try:
return api.get_collection_signatures(namespace, name, version)
except GalaxyError as api_err:
last_err = api_err
except Exception as unknown_err:
# Warn for debugging purposes, since the Galaxy server may be unexpectedly down.
last_err = unknown_err
display.warning(
"Skipping Galaxy server {server!s}. "
"Got an unexpected error when getting "
"available versions of collection {fqcn!s}: {err!s}".
format(
server=api.api_server,
fqcn=collection_candidate.fqcn,
err=to_text(unknown_err),
)
)
if last_err:
raise last_err
return []
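# Illustrative usage sketch (not from the original source; variable names are
# assumptions). The proxy is constructed from the configured Galaxy servers and
# handed to the dependency provider, e.g.:
#
#     proxy = MultiGalaxyAPIProxy(galaxy_apis, concrete_artifacts_manager)
#     versions = proxy.get_collection_versions(requirement)   # set of (version, api) pairs
#     deps = proxy.get_collection_dependencies(candidate)     # dict[str, str]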
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
lib/ansible/galaxy/dependency_resolution/__init__.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2020-2021, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
"""Dependency resolution machinery."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import typing as t
if t.TYPE_CHECKING:
from ansible.galaxy.api import GalaxyAPI
from ansible.galaxy.collection.concrete_artifact_manager import (
ConcreteArtifactsManager,
)
from ansible.galaxy.dependency_resolution.dataclasses import (
Candidate,
Requirement,
)
from ansible.galaxy.collection.galaxy_api_proxy import MultiGalaxyAPIProxy
from ansible.galaxy.dependency_resolution.providers import CollectionDependencyProvider
from ansible.galaxy.dependency_resolution.reporters import CollectionDependencyReporter
from ansible.galaxy.dependency_resolution.resolvers import CollectionDependencyResolver
def build_collection_dependency_resolver(
galaxy_apis, # type: t.Iterable[GalaxyAPI]
concrete_artifacts_manager, # type: ConcreteArtifactsManager
user_requirements, # type: t.Iterable[Requirement]
preferred_candidates=None, # type: t.Iterable[Candidate]
with_deps=True, # type: bool
with_pre_releases=False, # type: bool
upgrade=False, # type: bool
include_signatures=True, # type: bool
): # type: (...) -> CollectionDependencyResolver
"""Return a collection dependency resolver.
The returned instance will have a ``resolve()`` method for
further consumption.
"""
return CollectionDependencyResolver(
CollectionDependencyProvider(
apis=MultiGalaxyAPIProxy(galaxy_apis, concrete_artifacts_manager),
concrete_artifacts_manager=concrete_artifacts_manager,
user_requirements=user_requirements,
preferred_candidates=preferred_candidates,
with_deps=with_deps,
with_pre_releases=with_pre_releases,
upgrade=upgrade,
include_signatures=include_signatures,
),
CollectionDependencyReporter(),
)
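# Illustrative usage sketch (not from the original source; variable names are
# assumptions). The caller in lib/ansible/galaxy/collection/__init__.py consumes
# the returned resolver roughly like this:
#
#     resolver = build_collection_dependency_resolver(
#         galaxy_apis=apis,
#         concrete_artifacts_manager=cam,
#         user_requirements=requirements,
#         preferred_candidates=installed,
#     )
#     dependency_map = resolver.resolve(requirements, max_rounds=2000000).mapping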
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
test/integration/targets/ansible-galaxy-collection/tasks/install_offline.yml
|
- set_fact:
init_dir: "{{ galaxy_dir }}/offline/setup"
build_dir: "{{ galaxy_dir }}/offline/build"
install_dir: "{{ galaxy_dir }}/offline/collections"
- name: create test directories
file:
path: "{{ item }}"
state: directory
loop:
- "{{ init_dir }}"
- "{{ build_dir }}"
- "{{ install_dir }}"
- name: test installing a tarfile with an installed dependency offline
block:
- name: init two collections
command: ansible-galaxy collection init ns.{{ item }} --init-path {{ init_dir }}
loop:
- coll1
- coll2
- name: add one collection as the dependency of the other
lineinfile:
path: "{{ galaxy_dir }}/offline/setup/ns/coll1/galaxy.yml"
regexp: "^dependencies:*"
line: "dependencies: {'ns.coll2': '1.0.0'}"
- name: build both collections
command: ansible-galaxy collection build {{ init_dir }}/ns/{{ item }}
args:
chdir: "{{ build_dir }}"
loop:
- coll1
- coll2
- name: install the dependency from the tarfile
command: ansible-galaxy collection install {{ build_dir }}/ns-coll2-1.0.0.tar.gz -p {{ install_dir }} -s offline
- name: install the tarfile with the installed dependency
command: ansible-galaxy collection install {{ build_dir }}/ns-coll1-1.0.0.tar.gz -p {{ install_dir }} -s offline
always:
- name: clean up test directories
file:
path: "{{ item }}"
state: absent
loop:
- "{{ init_dir }}"
- "{{ build_dir }}"
- "{{ install_dir }}"
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
test/integration/targets/ansible-galaxy-collection/templates/ansible.cfg.j2
|
[galaxy]
# Ensures subsequent unstable reruns don't use the cached information causing another failure
cache_dir={{ remote_tmp_dir }}/galaxy_cache
server_list=offline,pulp_v2,pulp_v3,galaxy_ng,secondary
[galaxy_server.offline]
url=https://test-hub.demolab.local/api/galaxy/content/api/
[galaxy_server.pulp_v2]
url={{ pulp_server }}published/api/
username={{ pulp_user }}
password={{ pulp_password }}
[galaxy_server.pulp_v3]
url={{ pulp_server }}published/api/
v3=true
username={{ pulp_user }}
password={{ pulp_password }}
[galaxy_server.galaxy_ng]
url={{ galaxy_ng_server }}
token={{ galaxy_ng_token.json.token }}
[galaxy_server.secondary]
url={{ pulp_server }}secondary/api/
v3=true
username={{ pulp_user }}
password={{ pulp_password }}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
In the dependency resolution stage, ansible-galaxy appears to reference https://galaxy.ansible.com/api/v2/collections/xxx/yyy even though it successfully references a collection that exists in collections_paths. I have not tried with other modules, but I'm confident this pattern is consistent.
I've verified it happens in recent versions as Ansible Core 2.12.4 or 2.12.0, but also in older versions as Ansible 2.9.25.
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are locally
```yaml
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
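As a convenience, the requirements file from the first workaround can be generated mechanically — the helper below is a hypothetical sketch, not something shipped with ansible, and assumes the tarballs sit in a single directory:

```python
# Sketch: write a requirements.yaml entry for every local collection tarball.
import glob
import yaml

def write_requirements(tarball_dir, out_path="requirements.yaml"):
    entries = [{"source": path, "type": "file"}
               for path in sorted(glob.glob(f"{tarball_dir}/*.tar.gz"))]
    with open(out_path, "w") as fd:
        yaml.safe_dump({"collections": entries}, fd, sort_keys=False)

# write_requirements(".")
# then: ansible-galaxy collection install -r requirements.yaml
```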
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an otherwise isolated environment), try to install a collection whose dependencies are already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
test/integration/targets/ansible-galaxy-collection/vars/main.yml
|
galaxy_verbosity: "{{ '' if not ansible_verbosity else '-' ~ ('v' * ansible_verbosity) }}"
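# e.g. ansible_verbosity=3 renders '-vvv'; ansible_verbosity=0 renders an empty string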
gpg_homedir: "{{ galaxy_dir }}/gpg"
supported_resolvelib_versions:
- "0.5.3" # Oldest supported
- "0.6.0"
- "0.7.0"
- "0.8.0"
unsupported_resolvelib_versions:
- "0.2.0" # Fails on import
- "0.5.1"
pulp_repositories:
- published
- secondary
publish_namespaces:
- ansible_test
collection_list:
# Scenario to test out pre-release being ignored unless explicitly set and version pagination.
- namespace: namespace1
name: name1
version: 0.0.1
- namespace: namespace1
name: name1
version: 0.0.2
- namespace: namespace1
name: name1
version: 0.0.3
- namespace: namespace1
name: name1
version: 0.0.4
- namespace: namespace1
name: name1
version: 0.0.5
- namespace: namespace1
name: name1
version: 0.0.6
- namespace: namespace1
name: name1
version: 0.0.7
- namespace: namespace1
name: name1
version: 0.0.8
- namespace: namespace1
name: name1
version: 0.0.9
- namespace: namespace1
name: name1
version: 0.0.10
- namespace: namespace1
name: name1
version: 0.1.0
- namespace: namespace1
name: name1
version: 1.0.0
- namespace: namespace1
name: name1
version: 1.0.9
- namespace: namespace1
name: name1
version: 1.1.0-beta.1
# Pad out number of namespaces for pagination testing
- namespace: namespace2
name: name
- namespace: namespace3
name: name
- namespace: namespace4
name: name
- namespace: namespace5
name: name
- namespace: namespace6
name: name
- namespace: namespace7
name: name
- namespace: namespace8
name: name
- namespace: namespace9
name: name
# Complex dependency resolution
- namespace: parent_dep
name: parent_collection
version: 0.0.1
dependencies:
child_dep.child_collection: '<0.5.0'
- namespace: parent_dep
name: parent_collection
version: 1.0.0
dependencies:
child_dep.child_collection: '>=0.5.0,<1.0.0'
- namespace: parent_dep
name: parent_collection
version: 1.1.0
dependencies:
child_dep.child_collection: '>=0.9.9,<=1.0.0'
- namespace: parent_dep
name: parent_collection
version: 2.0.0
dependencies:
child_dep.child_collection: '>=1.0.0'
- namespace: parent_dep2
name: parent_collection
dependencies:
child_dep.child_collection: '0.5.0'
- namespace: child_dep
name: child_collection
version: 0.4.0
- namespace: child_dep
name: child_collection
version: 0.5.0
- namespace: child_dep
name: child_collection
version: 0.9.9
dependencies:
child_dep.child_dep2: '!=1.2.3'
- namespace: child_dep
name: child_collection
version: 1.0.0
dependencies:
child_dep.child_dep2: '!=1.2.3'
- namespace: child_dep
name: child_dep2
version: 1.2.2
- namespace: child_dep
name: child_dep2
version: 1.2.3
# Dep resolution failure
- namespace: fail_namespace
name: fail_collection
version: 2.1.2
dependencies:
fail_dep.name: '0.0.5'
fail_dep2.name: '<0.0.5'
- namespace: fail_dep
name: name
version: '0.0.5'
dependencies:
fail_dep2.name: '>0.0.5'
- namespace: fail_dep2
name: name
# Symlink tests
- namespace: symlink
name: symlink
use_symlink: yes
# Caching update tests
- namespace: cache
name: cache
version: 1.0.0
# Dep with beta version
- namespace: dep_with_beta
name: parent
dependencies:
namespace1.name1: '*'
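To make the dependency fixtures above easier to trace, here is a toy constraint checker — an illustration only, not the resolvelib-based resolver ansible-galaxy actually uses — that handles just the plain X.Y.Z versions and the comparison operators appearing in this file:

```python
# Toy checker for the version specs used in the fixtures above.
import operator
import re

OPS = {">=": operator.ge, "<=": operator.le, ">": operator.gt,
       "<": operator.lt, "!=": operator.ne, "==": operator.eq}

def _key(version):
    return tuple(int(part) for part in version.split("."))

def satisfies(version, spec):
    for clause in spec.split(","):
        match = re.match(r"(>=|<=|==|!=|>|<)?\s*([\d.]+)", clause.strip())
        op = match.group(1) or "=="  # a bare version means an exact match
        if not OPS[op](_key(version), _key(match.group(2))):
            return False
    return True

child_versions = ["0.4.0", "0.5.0", "0.9.9", "1.0.0"]
# parent_dep.parent_collection 1.1.0 pins child_dep.child_collection to '>=0.9.9,<=1.0.0':
print([v for v in child_versions if satisfies(v, ">=0.9.9,<=1.0.0")])
# -> ['0.9.9', '1.0.0']
```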
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 77,443 |
ansible-galaxy performs a network call even if the dependencies are already satisfied
|
### Summary
In the dependency resolution stage, ansible-galaxy still calls https://galaxy.ansible.com/api/v2/collections/xxx/yyy even though it has already found the required collection in collections_paths. I have not tried this with other collections, but I'm confident the pattern is consistent.
I've verified that this happens in recent versions such as ansible-core 2.12.4 and 2.12.0, and also in older versions such as Ansible 2.9.25.
Possible workarounds (for Ansible Core 2.12.x):
- Specify where the dependencies are located locally:
```console
[root@aap21 ~]# cat requirements.yaml
collections:
- source: amazon-aws-3.1.1.tar.gz
type: file
- source: community-aws-3.1.0.tar.gz
type: file
[root@aap21 ~]# ansible-galaxy collection install -r requirements.yaml -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Reading requirement file at '/root/requirements.yaml'
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
- Manually install the dependencies **beforehand**, and then use `--no-deps` to install the final package (for Ansible Core 2.12.x AND Ansible 2.9.25)
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv --no-deps
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws' <==
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully <===
```
### Issue Type
Bug Report
### Component Name
ansible-galaxy
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
```
### Configuration
```console
$ ansible-config dump --only-changed
$
```
### OS / Environment
```console
$ uname -a
Linux aap21 4.18.0-240.1.1.el8_3.x86_64 #1 SMP Fri Oct 16 13:36:46 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Red Hat Enterprise Linux release 8.3 (Ootpa)
```
### Steps to Reproduce
In an environment without internet access (or in an otherwise isolated environment), try to install a collection whose dependencies are already satisfied.
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno 101] Network is unreachable>
```
### Expected Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'community.aws:3.1.0' to '/root/.ansible/collections/ansible_collections/community/aws'
community.aws:3.1.0 was installed successfully
'amazon.aws:3.1.1' is already installed, skipping. <=====
```
### Actual Results
```console
[root@aap21 ~]# ansible-galaxy collection install amazon-aws-3.1.1.tar.gz
Starting galaxy collection install process
Process install dependency map
Starting collection install process
Installing 'amazon.aws:3.1.1' to '/root/.ansible/collections/ansible_collections/amazon/aws'
amazon.aws:3.1.1 was installed successfully <====
[root@aap21 ~]# ansible-galaxy collection install community-aws-3.1.0.tar.gz -vvvv
ansible-galaxy [core 2.12.0]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible-galaxy
python version = 3.8.6 (default, Jan 22 2021, 11:41:28) [GCC 8.4.1 20200928 (Red Hat 8.4.1-1)]
jinja version = 2.10.3
libyaml = True
Using /etc/ansible/ansible.cfg as config file
Starting galaxy collection install process
Found installed collection amazon.aws:3.1.1 at '/root/.ansible/collections/ansible_collections/amazon/aws'
Process install dependency map
Initial connection to galaxy_server: https://galaxy.ansible.com
Found API version 'v1, v2' with Galaxy server default (https://galaxy.ansible.com/api/)
Opened /root/.ansible/galaxy_token
Calling Galaxy at https://galaxy.ansible.com/api/v2/collections/amazon/aws/
[WARNING]: Skipping Galaxy server https://galaxy.ansible.com/api/. Got an unexpected error when getting available versions of collection amazon.aws: Unknown error when
attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
ERROR! Unknown error when attempting to call Galaxy at 'https://galaxy.ansible.com/api/v2/collections/amazon/aws/': <urlopen error [Errno -2] Name or service not known>
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/77443
|
https://github.com/ansible/ansible/pull/78678
|
813c25eed1e4832a8ae363455a2f40bb3de33c7f
|
a02e22e902a69aeb465f16bf03f7f5a91b2cb828
| 2022-04-01T05:52:33Z |
python
| 2022-09-19T18:10:36Z |
test/units/galaxy/test_collection_install.py
|
# -*- coding: utf-8 -*-
# Copyright: (c) 2019, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import copy
import json
import os
import pytest
import re
import shutil
import stat
import tarfile
import yaml
from io import BytesIO, StringIO
from unittest.mock import MagicMock, patch
from unittest import mock
import ansible.module_utils.six.moves.urllib.error as urllib_error
from ansible import context
from ansible.cli.galaxy import GalaxyCLI
from ansible.errors import AnsibleError
from ansible.galaxy import collection, api, dependency_resolution
from ansible.galaxy.dependency_resolution.dataclasses import Candidate, Requirement
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.process import get_bin_path
from ansible.utils import context_objects as co
from ansible.utils.display import Display
class RequirementCandidates():
def __init__(self):
self.candidates = []
def func_wrapper(self, func):
def run(*args, **kwargs):
self.candidates = func(*args, **kwargs)
return self.candidates
return run
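# RequirementCandidates.func_wrapper is monkeypatched around
# CollectionDependencyProvider.find_matches further down: every candidate list
# the resolver produces is captured in self.candidates so tests can assert on
# version ordering.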
def call_galaxy_cli(args):
orig = co.GlobalCLIArgs._Singleton__instance
co.GlobalCLIArgs._Singleton__instance = None
try:
GalaxyCLI(args=['ansible-galaxy', 'collection'] + args).run()
finally:
co.GlobalCLIArgs._Singleton__instance = orig
def artifact_json(namespace, name, version, dependencies, server):
json_str = json.dumps({
'artifact': {
'filename': '%s-%s-%s.tar.gz' % (namespace, name, version),
'sha256': '2d76f3b8c4bab1072848107fb3914c345f71a12a1722f25c08f5d3f51f4ab5fd',
'size': 1234,
},
'download_url': '%s/download/%s-%s-%s.tar.gz' % (server, namespace, name, version),
'metadata': {
'namespace': namespace,
'name': name,
'dependencies': dependencies,
},
'version': version
})
return to_text(json_str)
def artifact_versions_json(namespace, name, versions, galaxy_api, available_api_versions=None):
results = []
available_api_versions = available_api_versions or {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
for version in versions:
results.append({
'href': '%s/api/%s/%s/%s/versions/%s/' % (galaxy_api.api_server, api_version, namespace, name, version),
'version': version,
})
if api_version == 'v2':
json_str = json.dumps({
'count': len(versions),
'next': None,
'previous': None,
'results': results
})
if api_version == 'v3':
response = {'meta': {'count': len(versions)},
'data': results,
'links': {'first': None,
'last': None,
'next': None,
'previous': None},
}
json_str = json.dumps(response)
return to_text(json_str)
def error_json(galaxy_api, errors_to_return=None, available_api_versions=None):
errors_to_return = errors_to_return or []
available_api_versions = available_api_versions or {}
response = {}
api_version = 'v2'
if 'v3' in available_api_versions:
api_version = 'v3'
if api_version == 'v2':
assert len(errors_to_return) <= 1
if errors_to_return:
response = errors_to_return[0]
if api_version == 'v3':
response['errors'] = errors_to_return
json_str = json.dumps(response)
return to_text(json_str)
@pytest.fixture(autouse='function')
def reset_cli_args():
co.GlobalCLIArgs._Singleton__instance = None
yield
co.GlobalCLIArgs._Singleton__instance = None
@pytest.fixture()
def collection_artifact(request, tmp_path_factory):
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
namespace = 'ansible_namespace'
collection = 'collection'
skeleton_path = os.path.join(os.path.dirname(os.path.split(__file__)[0]), 'cli', 'test_data', 'collection_skeleton')
collection_path = os.path.join(test_dir, namespace, collection)
call_galaxy_cli(['init', '%s.%s' % (namespace, collection), '-c', '--init-path', test_dir,
'--collection-skeleton', skeleton_path])
dependencies = getattr(request, 'param', {})
galaxy_yml = os.path.join(collection_path, 'galaxy.yml')
with open(galaxy_yml, 'rb+') as galaxy_obj:
existing_yaml = yaml.safe_load(galaxy_obj)
existing_yaml['dependencies'] = dependencies
galaxy_obj.seek(0)
galaxy_obj.write(to_bytes(yaml.safe_dump(existing_yaml)))
galaxy_obj.truncate()
# Create a file with +x in the collection so we can test the permissions
execute_path = os.path.join(collection_path, 'runme.sh')
with open(execute_path, mode='wb') as fd:
fd.write(b"echo hi")
os.chmod(execute_path, os.stat(execute_path).st_mode | stat.S_IEXEC)
call_galaxy_cli(['build', collection_path, '--output-path', test_dir])
collection_tar = os.path.join(test_dir, '%s-%s-0.1.0.tar.gz' % (namespace, collection))
return to_bytes(collection_path), to_bytes(collection_tar)
@pytest.fixture()
def galaxy_server():
context.CLIARGS._store = {'ignore_certs': False}
galaxy_api = api.GalaxyAPI(None, 'test_server', 'https://galaxy.ansible.com')
galaxy_api.get_collection_signatures = MagicMock(return_value=[])
return galaxy_api
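# Signature retrieval is stubbed to return no signatures, so the resolve and
# install code paths exercised by these tests never attempt GPG verification.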
def test_concrete_artifact_manager_scm_no_executable(monkeypatch):
url = 'https://github.com/org/repo'
version = 'commitish'
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
mock_get_bin_path = MagicMock(side_effect=[ValueError('Failed to find required executable')])
monkeypatch.setattr(collection.concrete_artifact_manager, 'get_bin_path', mock_get_bin_path)
error = re.escape(
"Could not find git executable to extract the collection from the Git repository `https://github.com/org/repo`"
)
with pytest.raises(AnsibleError, match=error):
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'commitish', False),
('https://github.com/org/repo,commitish', None, False),
('https://github.com/org/repo/,commitish', None, True),
('https://github.com/org/repo#,commitish', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
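    # mkdtemp is mocked to return '', so the expected clone destination in the
    # assertions below is the empty string.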
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
clone_cmd = (git_executable, 'clone', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'commitish')
@pytest.mark.parametrize(
'url,version,trailing_slash',
[
('https://github.com/org/repo', 'HEAD', False),
('https://github.com/org/repo,HEAD', None, False),
('https://github.com/org/repo/,HEAD', None, True),
('https://github.com/org/repo#,HEAD', None, False),
('https://github.com/org/repo', None, False),
]
)
def test_concrete_artifact_manager_scm_cmd_shallow(url, version, trailing_slash, monkeypatch):
mock_subprocess_check_call = MagicMock()
monkeypatch.setattr(collection.concrete_artifact_manager.subprocess, 'check_call', mock_subprocess_check_call)
mock_mkdtemp = MagicMock(return_value='')
monkeypatch.setattr(collection.concrete_artifact_manager, 'mkdtemp', mock_mkdtemp)
collection.concrete_artifact_manager._extract_collection_from_git(url, version, b'path')
assert mock_subprocess_check_call.call_count == 2
repo = 'https://github.com/org/repo'
if trailing_slash:
repo += '/'
git_executable = get_bin_path('git')
shallow_clone_cmd = (git_executable, 'clone', '--depth=1', repo, '')
assert mock_subprocess_check_call.call_args_list[0].args[0] == shallow_clone_cmd
assert mock_subprocess_check_call.call_args_list[1].args[0] == (git_executable, 'checkout', 'HEAD')
def test_build_requirement_from_path(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == collection_artifact[0]
assert actual.ver == u'0.1.0'
@pytest.mark.parametrize('version', ['1.1.1', '1.1.0', '1.0.0'])
def test_build_requirement_from_path_with_manifest(version, collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': version,
'dependencies': {
'ansible_namespace.collection': '*'
}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
# While the folder name suggests a different collection, we treat MANIFEST.json as the source of truth.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == to_text(version)
def test_build_requirement_from_path_invalid_manifest(collection_artifact):
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(b"not json")
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_artifact_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# a collection artifact should always contain a valid version
manifest_path = os.path.join(collection_artifact[0], b'MANIFEST.json')
manifest_value = json.dumps({
'collection_info': {
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {}
}
})
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(manifest_value))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
expected = (
'^Collection metadata file `.*` at `.*` is expected to have a valid SemVer '
'version value but got {empty_unicode_string!r}$'.
format(empty_unicode_string=u'')
)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
def test_build_requirement_from_path_no_version(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
# version may be falsey/arbitrary strings for collections in development
manifest_path = os.path.join(collection_artifact[0], b'galaxy.yml')
metadata = {
'authors': ['Ansible'],
'readme': 'README.md',
'namespace': 'namespace',
'name': 'name',
'version': '',
'dependencies': {},
}
with open(manifest_path, 'wb') as manifest_obj:
manifest_obj.write(to_bytes(yaml.safe_dump(metadata)))
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_dir_path_as_unknown(collection_artifact[0], concrete_artifact_cm)
    # While the folder name suggests a different collection, we treat galaxy.yml as the source of truth here.
assert actual.namespace == u'namespace'
assert actual.name == u'name'
assert actual.src == collection_artifact[0]
assert actual.ver == u'*'
def test_build_requirement_from_tar(collection_artifact):
tmp_path = os.path.join(os.path.split(collection_artifact[1])[0], b'temp')
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(tmp_path, validate_certs=False)
actual = Requirement.from_requirement_dict({'name': to_text(collection_artifact[1])}, concrete_artifact_cm)
assert actual.namespace == u'ansible_namespace'
assert actual.name == u'collection'
assert actual.src == to_text(collection_artifact[1])
assert actual.ver == u'0.1.0'
def test_build_requirement_from_tar_fail_not_tar(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
test_file = os.path.join(test_dir, b'fake.tar.gz')
with open(test_file, 'wb') as test_obj:
test_obj.write(b"\x00\x01\x02\x03")
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection artifact at '%s' is not a valid tar file." % to_native(test_file)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(test_file)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'files': [],
'format': 1,
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('FILES.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection at '%s' does not contain the required file MANIFEST.json." % to_native(tar_path)
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_no_files(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = to_bytes(json.dumps(
{
'collection_info': {},
}
))
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
with pytest.raises(KeyError, match='namespace'):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_tar_invalid_manifest(tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
json_data = b"not a json"
tar_path = os.path.join(test_dir, b'ansible-collections.tar.gz')
with tarfile.open(tar_path, 'w:gz') as tfile:
b_io = BytesIO(json_data)
tar_info = tarfile.TarInfo('MANIFEST.json')
tar_info.size = len(json_data)
tar_info.mode = 0o0644
tfile.addfile(tarinfo=tar_info, fileobj=b_io)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
expected = "Collection tar file member MANIFEST.json does not contain a valid json string."
with pytest.raises(AnsibleError, match=expected):
Requirement.from_requirement_dict({'name': to_text(tar_path)}, concrete_artifact_cm)
def test_build_requirement_from_name(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.1.9', '2.1.10']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_version_metadata = MagicMock(
namespace='namespace', name='collection',
version='2.1.10', artifact_sha256='', dependencies={}
)
monkeypatch.setattr(api.GalaxyAPI, 'get_collection_version_metadata', mock_version_metadata)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
collections = ['namespace.collection']
requirements_file = None
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', collections[0]])
requirements = cli._require_one_of_collections_requirements(
collections, requirements_file, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.ver == u'2.1.10'
assert actual.src == galaxy_server
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_with_prerelease_explicit(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '2.0.1-beta.1', '2.0.1']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1-beta.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:2.0.1-beta.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:2.0.1-beta.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1-beta.1'
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1-beta.1')
def test_build_requirement_from_name_second_server(galaxy_server, monkeypatch, tmp_path_factory):
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.0.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
broken_server = copy.copy(galaxy_server)
broken_server.api_server = 'https://broken.com/'
mock_version_list = MagicMock()
mock_version_list.return_value = []
monkeypatch.setattr(broken_server, 'get_collection_versions', mock_version_list)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>1.0.1'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(
requirements, [broken_server, galaxy_server], concrete_artifact_cm, None, True, False, False, False
)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'1.0.3'
assert mock_version_list.call_count == 1
assert mock_version_list.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_build_requirement_from_name_missing(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.return_value = []
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n* namespace.collection:* (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_build_requirement_from_name_401_unauthorized(galaxy_server, monkeypatch, tmp_path_factory):
mock_open = MagicMock()
mock_open.side_effect = api.GalaxyError(urllib_error.HTTPError('https://galaxy.server.com', 401, 'msg', {},
StringIO()), "error")
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_open)
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>1.0.1'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "error (HTTP Code: 401, Message: msg)"
with pytest.raises(api.GalaxyError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server, galaxy_server], concrete_artifact_cm, None, False, False, False, False)
def test_build_requirement_from_name_single_version(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.0', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:==2.0.0'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:==2.0.0'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.0'
assert [c.ver for c in matches.candidates] == [u'2.0.0']
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.0')
def test_build_requirement_from_name_multiple_versions_one_match(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.1', None, None,
{}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:>=2.0.1,<2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:>=2.0.1,<2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.1'
assert [c.ver for c in matches.candidates] == [u'2.0.1']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
assert mock_get_info.call_count == 1
assert mock_get_info.mock_calls[0][1] == ('namespace', 'collection', '2.0.1')
def test_build_requirement_from_name_multiple_version_results(galaxy_server, monkeypatch, tmp_path_factory):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
multi_api_proxy = collection.galaxy_api_proxy.MultiGalaxyAPIProxy([galaxy_server], concrete_artifact_cm)
dep_provider = dependency_resolution.providers.CollectionDependencyProvider(apis=multi_api_proxy, concrete_artifacts_manager=concrete_artifact_cm)
matches = RequirementCandidates()
mock_find_matches = MagicMock(side_effect=matches.func_wrapper(dep_provider.find_matches), autospec=True)
monkeypatch.setattr(dependency_resolution.providers.CollectionDependencyProvider, 'find_matches', mock_find_matches)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['1.0.1', '1.0.2', '1.0.3']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
mock_get_versions.return_value = ['2.0.0', '2.0.1', '2.0.2', '2.0.3', '2.0.4', '2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
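    # The second assignment supersedes the first: only the 2.0.x version list
    # is ever returned to the resolver in this test.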
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.2'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.2'], None, artifacts_manager=concrete_artifact_cm
)['collections']
actual = collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)['namespace.collection']
assert actual.namespace == u'namespace'
assert actual.name == u'collection'
assert actual.src == galaxy_server
assert actual.ver == u'2.0.5'
# should be ordered latest to earliest
assert [c.ver for c in matches.candidates] == [u'2.0.5', u'2.0.4', u'2.0.3', u'2.0.1', u'2.0.0']
assert mock_get_versions.call_count == 1
assert mock_get_versions.mock_calls[0][1] == ('namespace', 'collection')
def test_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '2.0.5', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock()
mock_get_versions.return_value = ['2.0.5']
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection:!=2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['namespace.collection:!=2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=2.0.5 (direct request)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_dep_candidate_with_conflict(monkeypatch, tmp_path_factory, galaxy_server):
test_dir = to_bytes(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections Input'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_get_info_return = [
api.CollectionVersionMetadata('parent', 'collection', '2.0.5', None, None, {'namespace.collection': '!=1.0.0'}, None, None),
api.CollectionVersionMetadata('namespace', 'collection', '1.0.0', None, None, {}, None, None),
]
mock_get_info = MagicMock(side_effect=mock_get_info_return)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(side_effect=[['2.0.5'], ['1.0.0']])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'parent.collection:2.0.5'])
requirements = cli._require_one_of_collections_requirements(
['parent.collection:2.0.5'], None, artifacts_manager=concrete_artifact_cm
)['collections']
expected = "Failed to resolve the requested dependencies map. Could not satisfy the following requirements:\n"
expected += "* namespace.collection:!=1.0.0 (dependency of parent.collection:2.0.5)"
with pytest.raises(AnsibleError, match=re.escape(expected)):
collection._resolve_depenency_map(requirements, [galaxy_server], concrete_artifact_cm, None, False, True, False, False)
def test_install_installed_collection(monkeypatch, tmp_path_factory, galaxy_server):
mock_installed_collections = MagicMock(return_value=[Candidate('namespace.collection', '1.2.3', None, 'dir', None)])
monkeypatch.setattr(collection, 'find_existing_collections', mock_installed_collections)
test_dir = to_text(tmp_path_factory.mktemp('test-ÅÑŚÌβŁÈ Collections'))
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(test_dir, validate_certs=False)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
mock_get_info = MagicMock()
mock_get_info.return_value = api.CollectionVersionMetadata('namespace', 'collection', '1.2.3', None, None, {}, None, None)
monkeypatch.setattr(galaxy_server, 'get_collection_version_metadata', mock_get_info)
mock_get_versions = MagicMock(return_value=['1.2.3', '1.3.0'])
monkeypatch.setattr(galaxy_server, 'get_collection_versions', mock_get_versions)
cli = GalaxyCLI(args=['ansible-galaxy', 'collection', 'install', 'namespace.collection'])
cli.run()
expected = "Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`."
assert mock_display.mock_calls[1][1][0] == expected
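# This mirrors the scenario from the issue body above: the requested
# collection is already installed, and the CLI reports that there is nothing
# to do rather than forcing a reinstall.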
def test_install_collection(collection_artifact, monkeypatch):
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
collection_tar = collection_artifact[1]
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
output_path = os.path.join(os.path.split(collection_tar)[0])
collection_path = os.path.join(output_path, b'ansible_namespace', b'collection')
os.makedirs(os.path.join(collection_path, b'delete_me')) # Create a folder to verify the install cleans out the dir
candidate = Candidate('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)
collection.install(candidate, to_text(output_path), concrete_artifact_cm)
# Ensure the temp directory is empty, nothing is left behind
assert os.listdir(temp_path) == []
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'plugins')).st_mode) == 0o0755
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'README.md')).st_mode) == 0o0644
assert stat.S_IMODE(os.stat(os.path.join(collection_path, b'runme.sh')).st_mode) == 0o0755
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
def test_install_collection_with_download(galaxy_server, collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
shutil.rmtree(collection_path)
collections_dir = ('%s' % os.path.sep).join(to_text(collection_path).split('%s' % os.path.sep)[:-2])
temp_path = os.path.join(os.path.split(collection_tar)[0], b'temp')
os.makedirs(temp_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
mock_download = MagicMock()
mock_download.return_value = collection_tar
monkeypatch.setattr(concrete_artifact_cm, 'get_galaxy_artifact_path', mock_download)
req = Candidate('ansible_namespace.collection', '0.1.0', 'https://downloadme.com', 'galaxy', None)
collection.install(req, to_text(collections_dir), concrete_artifact_cm)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
assert mock_display.call_count == 2
assert mock_display.mock_calls[0][1][0] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" \
% to_text(collection_path)
assert mock_display.mock_calls[1][1][0] == "ansible_namespace.collection:0.1.0 was installed successfully"
assert mock_download.call_count == 1
assert mock_download.mock_calls[0][1][0].src == 'https://downloadme.com'
assert mock_download.mock_calls[0][1][0].type == 'galaxy'
def test_install_collections_from_tar(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
def test_install_collections_existing_without_force(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
assert os.path.isdir(collection_path)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'README.md', b'docs', b'galaxy.yml', b'playbooks', b'plugins', b'roles', b'runme.sh']
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 1
assert display_msgs[0] == 'Nothing to do. All requested collections are already installed. If you want to reinstall them, consider using `--force`.'
for msg in display_msgs:
assert 'WARNING' not in msg
def test_install_missing_metadata_warning(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
for file in [b'MANIFEST.json', b'galaxy.yml']:
b_path = os.path.join(collection_path, file)
if os.path.isfile(b_path):
os.unlink(b_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert 'WARNING' in display_msgs[0]
# Makes sure we don't get stuck in some recursive loop
@pytest.mark.parametrize('collection_artifact', [
{'ansible_namespace.collection': '>=0.0.1'},
], indirect=True)
def test_install_collection_with_circular_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
mock_display = MagicMock()
monkeypatch.setattr(Display, 'display', mock_display)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
actual_files = os.listdir(collection_path)
actual_files.sort()
assert actual_files == [b'FILES.json', b'MANIFEST.json', b'README.md', b'docs', b'playbooks', b'plugins', b'roles',
b'runme.sh']
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
assert actual_manifest['collection_info']['dependencies'] == {'ansible_namespace.collection': '>=0.0.1'}
# Filter out the progress cursor display calls.
display_msgs = [m[1][0] for m in mock_display.mock_calls if 'newline' not in m[2] and len(m[1]) == 1]
assert len(display_msgs) == 4
assert display_msgs[0] == "Process install dependency map"
assert display_msgs[1] == "Starting collection install process"
assert display_msgs[2] == "Installing 'ansible_namespace.collection:0.1.0' to '%s'" % to_text(collection_path)
assert display_msgs[3] == "ansible_namespace.collection:0.1.0 was installed successfully"
@pytest.mark.parametrize('collection_artifact', [
None,
{},
], indirect=True)
def test_install_collection_with_no_dependency(collection_artifact, monkeypatch):
collection_path, collection_tar = collection_artifact
temp_path = os.path.split(collection_tar)[0]
shutil.rmtree(collection_path)
concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(temp_path, validate_certs=False)
requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
collection.install_collections(requirements, to_text(temp_path), [], False, False, False, False, False, False, concrete_artifact_cm, True)
assert os.path.isdir(collection_path)
with open(os.path.join(collection_path, b'MANIFEST.json'), 'rb') as manifest_obj:
actual_manifest = json.loads(to_text(manifest_obj.read()))
assert not actual_manifest['collection_info']['dependencies']
assert actual_manifest['collection_info']['namespace'] == 'ansible_namespace'
assert actual_manifest['collection_info']['name'] == 'collection'
assert actual_manifest['collection_info']['version'] == '0.1.0'
@pytest.mark.parametrize(
"signatures,required_successful_count,ignore_errors,expected_success",
[
([], 'all', [], True),
(["good_signature"], 'all', [], True),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], 'all', [], False),
# This is expected to succeed because ignored does not increment failed signatures.
# "all" signatures is not a specific number, so all == no (non-ignored) signatures in this case.
([collection.gpg.GpgBadArmor(status='failed')], 'all', ["BADARMOR"], True),
([collection.gpg.GpgBadArmor(status='failed'), "good_signature"], 'all', ["BADARMOR"], True),
([], '+all', [], False),
([collection.gpg.GpgBadArmor(status='failed')], '+all', ["BADARMOR"], False),
([], '1', [], True),
([], '+1', [], False),
(["good_signature"], '2', [], False),
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', [], False),
# This is expected to fail because ignored does not increment successful signatures.
# 2 signatures are required, but only 1 is successful.
(["good_signature", collection.gpg.GpgBadArmor(status='failed')], '2', ["BADARMOR"], False),
(["good_signature", "good_signature"], '2', [], True),
]
)
def test_verify_file_signatures(signatures, required_successful_count, ignore_errors, expected_success):
# type: (list, str, list, bool) -> None
def gpg_error_generator(results):
for result in results:
if isinstance(result, collection.gpg.GpgBaseError):
yield result
fqcn = 'ns.coll'
manifest_file = 'MANIFEST.json'
keyring = '~/.ansible/pubring.kbx'
with patch.object(collection, 'run_gpg_verify', MagicMock(return_value=("somestdout", 0,))):
with patch.object(collection, 'parse_gpg_errors', MagicMock(return_value=gpg_error_generator(signatures))):
assert collection.verify_file_signatures(
fqcn,
manifest_file,
signatures,
keyring,
required_successful_count,
ignore_errors
) == expected_success
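# A minimal sketch of the install flow the tests above repeatedly exercise: build a
# ConcreteArtifactsManager, wrap the tarball in a Requirement, then call
# collection.install_collections with the same positional arguments the assertions use.
# The function name and 'dest' parameter are illustrative assumptions, not part of the suite.
def _example_install_from_tar(collection_tar, dest):
    concrete_artifact_cm = collection.concrete_artifact_manager.ConcreteArtifactsManager(dest, validate_certs=False)
    requirements = [Requirement('ansible_namespace.collection', '0.1.0', to_text(collection_tar), 'file', None)]
    collection.install_collections(requirements, to_text(dest), [], False, False, False, False, False, False, concrete_artifact_cm, True)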
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,822 |
Remove deprecated CALLBACKS_ENABLED.ini.0
|
### Summary
The config option `CALLBACKS_ENABLED.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78822
|
https://github.com/ansible/ansible/pull/78830
|
76b746655a36807fa9198064ca9fe7c6cc00083a
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
| 2022-09-20T17:07:13Z |
python
| 2022-09-21T20:08:53Z |
changelogs/fragments/78821-78822-remove-callback_whitelist.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,822 |
Remove deprecated CALLBACKS_ENABLED.ini.0
|
### Summary
The config option `CALLBACKS_ENABLED.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78822
|
https://github.com/ansible/ansible/pull/78830
|
76b746655a36807fa9198064ca9fe7c6cc00083a
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
| 2022-09-20T17:07:13Z |
python
| 2022-09-21T20:08:53Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
description: Accept list of cowsay templates that are 'safe' to use, set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
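# Illustrative sketch (not a shipped default): given the env/ini entries above, pipelining
# could be enabled in ansible.cfg as below; the value is an example assumption.
#   [connection]
#   pipelining = True
# or equivalently via the environment: ANSIBLE_PIPELINING=True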
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
- This setting controls if become is skipped when remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. Equivalent to --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
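# Illustrative sketch (not a shipped default): with the default ANSIBLE_HOME of ~/.ansible,
# a collection 'my.collection' on the first search path above would be expected at
#   ~/.ansible/collections/ansible_collections/my/collection/
# i.e. nested under an 'ansible_collections/<namespace>/<name>' subtree, never directly
# in the configured directory.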
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- Sets the behavior when loading a collection that does not support the running Ansible version (as declared via the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. Equivalent to --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
- Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
- By default Ansible will issue a warning when a warning is received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
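# Illustrative sketch (not a shipped default): the current ini spelling above would be used as
#   [defaults]
#   callbacks_enabled = timer, profile_tasks
# while the 'callback_whitelist' key is the deprecated spelling scheduled for removal in 2.15.
# The plugin names here are example assumptions.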
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
- The Ansible project recommends you **avoid ``merge`` for new projects**.
- It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do rely heavily on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
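# Illustrative sketch (not a shipped default): instead of setting hash_behaviour=merge,
# the description above recommends merging explicitly where needed, for example with
#   merged_conf: "{{ base_conf | combine(override_conf, recursive=True) }}"
# where base_conf and override_conf are example variable names, not defaults defined here.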
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
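# Illustrative note: with jinja2_native enabled, a template such as "{{ 1 + 1 }}" evaluates
# to the integer 2 rather than the string "2"; that is what "preserves variable types"
# above refers to.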
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty, logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections; when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list without causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
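# Illustrative sketch (not a shipped default): a two-entry list in ansible.cfg might look like
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod
# using the label@source vault-id form; the labels and paths are example assumptions, and
# the ids are tried in the order listed.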
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See 'How do I keep secret data in my playbook?' for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by setting this option to 'ignore'.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer, or 'all' to indicate that all signatures must successfully validate the collection.
    - Prepend '+' to the value to fail if no valid signatures are found for the collection.
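# Illustrative example (not part of the schema): valid values per the description above
# include a positive integer ('1'), 'all', or a '+'-prefixed variant ('+1', '+all')
# that additionally fails when no valid signatures are found.
#   ansible.cfg:
#     [galaxy]
#     required_valid_signature_count = +1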
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
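# Illustrative example (not part of the schema): the interpreter can be pinned per host
# through the ``ansible_python_interpreter`` variable listed above; the host name and
# path below are hypothetical.
#   inventory:
#     [app_servers]
#     app01 ansible_python_interpreter=/usr/local/bin/python3.10
#   or globally, via the environment variable defined above:
#     export ANSIBLE_PYTHON_INTERPRETER=auto_silent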
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
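# Illustrative example (not part of the schema): with 'always', a group name such as
# 'web-servers' (the '-' is not valid in Python identifiers) would be transformed to
# 'web_servers' with a warning; with 'never' the name is kept and only warned about.
#   ansible.cfg:
#     [defaults]
#     force_valid_group_names = always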
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
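# Illustrative example (not part of the schema): trimming the enabled inventory plugins
# to only the ones in use also fixes their resolution order, per the description above.
#   ansible.cfg:
#     [inventory]
#     enable_plugins = host_list, ini, yaml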
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
  description: List of module name prefixes that are treated as network modules for grouping and connection handling purposes.
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles, which led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows you to return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
  description: Automatically add host keys to the known hosts file when using the paramiko connection plugin.
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
  description: Toggle to instruct paramiko to look for discoverable private key files in ~/.ssh/ when authenticating.
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
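# Illustrative example (not part of the schema): capping every task at 300 seconds;
# 0 (the default) disables the timeout entirely.
#   ansible.cfg:
#     [defaults]
#     task_timeout = 300
#   environment:
#     export ANSIBLE_TASK_TIMEOUT=300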
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows you to change the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
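# Illustrative example (not part of the schema): restricting 'variable' file lookups to
# .yml and .yaml only, dropping .json from the defaults listed above.
#   ansible.cfg:
#     [defaults]
#     yaml_valid_extensions = .yml, .yaml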
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
    host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom SSH configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
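# Illustrative example (not part of the schema) of the conversion this option governs:
# for a str-typed module parameter, an unquoted YAML scalar is parsed by YAML first and
# then converted back to a string, so the value can change shape. 'some_param' below is
# a hypothetical parameter name.
#   some_param: 1.00      # parsed as a float -> becomes '1.0' (triggers this action)
#   some_param: '1.00'    # fully quoted      -> stays exactly '1.00'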
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,821 |
Remove deprecated CALLBACKS_ENABLED.env.0
|
### Summary
The config option `CALLBACKS_ENABLED.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78821
|
https://github.com/ansible/ansible/pull/78830
|
76b746655a36807fa9198064ca9fe7c6cc00083a
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
| 2022-09-20T17:07:12Z |
python
| 2022-09-21T20:08:53Z |
changelogs/fragments/78821-78822-remove-callback_whitelist.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,821 |
Remove deprecated CALLBACKS_ENABLED.env.0
|
### Summary
The config option `CALLBACKS_ENABLED.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78821
|
https://github.com/ansible/ansible/pull/78830
|
76b746655a36807fa9198064ca9fe7c6cc00083a
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
| 2022-09-20T17:07:12Z |
python
| 2022-09-21T20:08:53Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
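# Illustrative example (not part of the schema): enabling pipelining via either ini
# section listed above; remember the 'requiretty' caveat from the description.
#   ansible.cfg:
#     [connection]
#     pipelining = True
#   environment:
#     export ANSIBLE_PIPELINING=True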
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when the remote user and the become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
    - 'The password file to use for the become plugin. Equivalent to --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
  description: Chooses which cache plugin to use; the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
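# Illustrative example (not part of the schema): per the description above, collections
# live in nested subdirectories under each configured root. With
#   collections_path = ~/.ansible/collections:/usr/share/ansible/collections
# the collection 'my.collection' must be installed as
#   ~/.ansible/collections/ansible_collections/my/collection/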
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
    - When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`), this setting controls the resulting behaviour.
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
  description: 'The password file to use for the connection plugin. Equivalent to --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will show warnings received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
      through the templating engine late.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACK_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_CALLBACKS_ENABLED'
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callback_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'callbacks_enabled'
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
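# Illustrative example (not part of the schema): enabling callbacks that are not active
# by default. The callback names below assume the ansible.posix collection is installed;
# substitute whatever callbacks you actually use.
#   ansible.cfg:
#     [defaults]
#     callbacks_enabled = ansible.posix.timer, ansible.posix.profile_tasks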
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
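# Illustrative example (the value is arbitrary): raising the fork count for a
# large inventory via the ini key defined above:
#   [defaults]
#   forks = 50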
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
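# Illustrative sketch (the play content is hypothetical): disabling implicit fact
# gathering globally while opting back in for a specific play:
#   # ansible.cfg
#   [defaults]
#   gathering = explicit
#
#   # playbook
#   - hosts: all
#     gather_facts: true
#     tasks: []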
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
    - The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
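# Illustrative example (variable names are hypothetical): the recommended
# alternative to 'merge' is combining dictionaries explicitly where needed:
#   merged_conf: "{{ default_conf | combine(host_conf, recursive=True) }}"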
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
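# Illustrative example (paths and playbook name are hypothetical): pointing
# Ansible at multiple inventory sources for one run:
#   ANSIBLE_INVENTORY=./inventory/prod,./inventory/shared ansible-playbook site.yml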
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
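# Illustrative example: with jinja2_native enabled, a template such as
#   "{{ [1, 2] + [3] }}"
# resolves to an actual list rather than its string representation.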
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will prevent
      newer-style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
    - For connections that use a certificate or key file to authenticate rather than an agent or passwords,
      you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
    - When True, this causes Ansible templating to fail steps that reference undefined variable names, which are often the result of typos.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
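# Illustrative example (labels and paths are hypothetical): trying two vault ids
# in order when decrypting:
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod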
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
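# Illustrative example ('site.yml' is a placeholder): these two invocations are
# equivalent:
#   ANSIBLE_VERBOSITY=2 ansible-playbook site.yml
#   ansible-playbook -vv site.yml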
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
      If you didn't, then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
      Sometimes you run many tasks with the same action, so you want more information about the task to differentiate it from others of the same action.
      If you set this variable to True in the config, then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See 'How do I keep secret data in my playbook?' for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by setting this option to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      redirecting stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer or ``all`` to indicate all signatures must successfully validate the collection.
    - Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
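# Illustrative example: require at least one valid signature and also fail when
# no valid signatures are found at all:
#   [galaxy]
#   required_valid_signature_count = +1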
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
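# Illustrative example (host name and interpreter path are hypothetical): pinning
# the interpreter for a single host in an ini inventory via the variable above:
#   myhost ansible_python_interpreter=/usr/bin/python3.11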
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins, it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
  description: List of module name families (network platform names) that Ansible groups together for network-specific handling.
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
    - Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
  description: Toggle to automatically add host keys to the known hosts when using the paramiko connection plugin.
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
  description: Toggle for paramiko to search for discoverable private key files in ~/.ssh/ when authenticating.
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
  description: This controls the retry timeout for the persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
  description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
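# Illustrative example ('site.yml' is a placeholder): capping any single task at
# five minutes for one run:
#   ANSIBLE_TASK_TIMEOUT=300 ansible-playbook site.yml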
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with the netconf connection. If set to True, the bastion/jump
    host SSH settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom SSH configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,820 |
Remove deprecated ANSIBLE_COW_ACCEPTLIST.ini.0
|
### Summary
The config option `ANSIBLE_COW_ACCEPTLIST.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78820
|
https://github.com/ansible/ansible/pull/78831
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
|
228d25a321f3166cf73d14a929689774ce33fb51
| 2022-09-20T17:07:10Z |
python
| 2022-09-21T20:09:05Z |
changelogs/fragments/78819-78820-remove-deprecated-cow-options.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,820 |
Remove deprecated ANSIBLE_COW_ACCEPTLIST.ini.0
|
### Summary
The config option `ANSIBLE_COW_ACCEPTLIST.ini.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78820
|
https://github.com/ansible/ansible/pull/78831
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
|
228d25a321f3166cf73d14a929689774ce33fb51
| 2022-09-20T17:07:10Z |
python
| 2022-09-21T20:09:05Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
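# Illustrative example: turning pipelining on from ansible.cfg; either ini
# section below mirrors the ini entries documented above.
#   [defaults]
#   pipelining = True
# or:
#   [connection]
#   pipelining = True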
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
description: Sets the default value for the any_errors_fatal keyword, if True, Task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when the remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
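# Illustrative layout (hypothetical collection name): content under a
# configured collections path must sit below an ansible_collections tree,
# as the description above requires.
#   ~/.ansible/collections/
#     ansible_collections/
#       my/
#         collection/
#           galaxy.yml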
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
- When a collection is loaded that does not support the running Ansible version (with the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
description: Defines the color to use when emitting verbose messages. i.e those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will issue a warning when it receives one from a task action (module or action plugin)
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
however users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data which may be run
through the templating engine late
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
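# Illustrative example: enabling extra callbacks from ansible.cfg. The
# callback names below are placeholders; substitute callbacks actually
# installed on your controller.
#   [defaults]
#   callbacks_enabled = ansible.posix.profile_tasks, ansible.posix.timer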
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
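# Illustrative replacement (assumed play snippet) for the deprecated
# fact_path setting, using the module_defaults keyword named above:
#   - hosts: all
#     module_defaults:
#       ansible.builtin.setup:
#         fact_path: /etc/ansible/facts.d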
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
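# Illustrative example: switching to 'smart' gathering in ansible.cfg so
# facts are reused across plays (assumes a fact cache plugin is configured).
#   [defaults]
#   gathering = smart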
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
- Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in same file.
    - The Ansible project recommends you **avoid ``merge`` for new projects.**
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
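# Illustrative alternative to 'merge' (hypothetical variable names): merging
# two dictionaries explicitly with the combine filter, as the description
# above recommends, instead of changing this global setting.
#   merged_conf: "{{ base_conf | combine(override_conf, recursive=True) }}"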
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
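# Illustrative example (hypothetical paths): pointing Ansible at multiple
# inventory sources with a comma separated list.
#   export ANSIBLE_INVENTORY=./inventories/prod,./inventories/shared
# or:
#   [defaults]
#   inventory = ./inventories/prod,./inventories/shared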
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
    - Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will also prevent
      newer style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
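# Illustrative task (assumed snippet) showing where this interval applies:
# with 'poll' omitted, status checks for the async job below fall back to
# the default configured here.
#   - name: long running job
#     ansible.builtin.command: /usr/bin/some_long_job   # hypothetical command
#     async: 300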
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
    - Whether or not to enable the task debugger; this previously was done as a strategy plugin.
    - Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
      a task is failed or unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, False will not honor ignore_errors.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
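# Illustrative example (hypothetical labels and paths): trying two vault ids
# in order, equivalent to passing --vault-id twice on the command line.
#   [defaults]
#   vault_identity_list = dev@~/.vault_pass_dev, prod@~/.vault_pass_prod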
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
      made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values.
      See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
    - These warnings can be silenced by adjusting this setting to ``ignore``.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
    - The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
      outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
    - A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
    - This should be a positive integer or 'all' to indicate all signatures must successfully validate the collection.
    - Prepend '+' to the value to fail if no valid signatures are found for the collection.
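# Illustrative values for the setting above: '2' requires two valid
# signatures, 'all' requires every provided signature to verify, and '+1'
# additionally fails when no valid signatures are found.
#   [galaxy]
#   required_valid_signature_count = +1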
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
  description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, issue a warning, or just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
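# An illustrative sketch of pinning the interpreter rather than relying on
# discovery; the path and host name are placeholder assumptions:
#
#   [defaults]
#   interpreter_python = /usr/bin/python3
#
# or per host in an INI inventory:
#
#   web01 ansible_python_interpreter=/usr/bin/python3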
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
    - Makes Ansible transform invalid characters in group names supplied by inventory sources; see the commented example below.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
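# An illustrative sketch, assuming you want group names sanitized without the
# accompanying warning:
#
#   [defaults]
#   force_valid_group_names = silently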
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
    - This setting has been moved to the individual inventory plugins as a plugin option. See :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
    - This setting has been moved to the individual inventory plugins as a plugin option. See :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
    - This setting has been moved to the individual inventory plugins as a plugin option. See :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
    - This setting has been moved to the individual inventory plugins as a plugin option. See :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
    - This setting has been moved to the individual inventory plugins as a plugin option. See :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
  description: List of enabled inventory plugins; it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
  description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
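# An illustrative sketch combining the two entries above for an inventory
# directory that mixes hosts files with other content; the values are
# placeholder assumptions:
#
#   [inventory]
#   ignore_extensions = .orig, .ini, .cfg, .retry
#   ignore_patterns = ^README, \.bak$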
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
    - Facts are available inside the `ansible_facts` variable; this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
  description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows returning to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
  description: This controls the amount of time to wait for a response from the remote device before timing out the persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
    - This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
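# An illustrative sketch, assuming only the serializing filters should keep
# their string output untouched:
#
#   [jinja2]
#   dont_type_filters = string, to_json, to_nice_json, to_yaml, to_nice_yaml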
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
  description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
  description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
    - After this limit is reached, any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
  description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
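# An illustrative sketch, assuming .json files should not be treated as
# variable files:
#
#   [defaults]
#   yaml_valid_extensions = .yml, .yaml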
NETCONF_SSH_CONFIG:
  description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
    host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
    to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
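# An illustrative sketch of the conversion this action reacts to; the module
# name is a hypothetical placeholder for any module taking a string parameter:
#
#   - ansible.builtin.some_module:
#       version: 1.20      # YAML parses this as a float; converting it back triggers this action
#
#   - ansible.builtin.some_module:
#       version: '1.20'    # fully quoted, stays a string, no conversion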
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,819 |
Remove deprecated ANSIBLE_COW_ACCEPTLIST.env.0
|
### Summary
The config option `ANSIBLE_COW_ACCEPTLIST.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78819
|
https://github.com/ansible/ansible/pull/78831
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
|
228d25a321f3166cf73d14a929689774ce33fb51
| 2022-09-20T17:07:08Z |
python
| 2022-09-21T20:09:05Z |
changelogs/fragments/78819-78820-remove-deprecated-cow-options.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,819 |
Remove deprecated ANSIBLE_COW_ACCEPTLIST.env.0
|
### Summary
The config option `ANSIBLE_COW_ACCEPTLIST.env.0` should be removed from `lib/ansible/config/base.yml`. It was scheduled for removal in 2.15.
### Issue Type
Bug Report
### Component Name
`lib/ansible/config/base.yml`
### Ansible Version
2.15
### Configuration
N/A
### OS / Environment
N/A
### Steps to Reproduce
N/A
### Expected Results
N/A
### Actual Results
N/A
|
https://github.com/ansible/ansible/issues/78819
|
https://github.com/ansible/ansible/pull/78831
|
d514aeb2a1fdf7ac966dbd58445d273fc579106c
|
228d25a321f3166cf73d14a929689774ce33fb51
| 2022-09-20T17:07:08Z |
python
| 2022-09-21T20:09:05Z |
lib/ansible/config/base.yml
|
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
---
ANSIBLE_HOME:
name: The Ansible home path
description:
- The default root path for Ansible config files on the controller.
default: ~/.ansible
env:
- name: ANSIBLE_HOME
ini:
- key: home
section: defaults
type: path
version_added: '2.14'
ANSIBLE_CONNECTION_PATH:
name: Path of ansible-connection script
default: null
description:
- Specify where to look for the ansible-connection script. This location will be checked before searching $PATH.
- If null, ansible will start with the same directory as the ansible script.
type: path
env: [{name: ANSIBLE_CONNECTION_PATH}]
ini:
- {key: ansible_connection_path, section: persistent_connection}
yaml: {key: persistent_connection.ansible_connection_path}
version_added: "2.8"
ANSIBLE_COW_SELECTION:
name: Cowsay filter selection
default: default
  description: This allows you to choose a specific cowsay stencil for the banners or use 'random' to cycle through them.
env: [{name: ANSIBLE_COW_SELECTION}]
ini:
- {key: cow_selection, section: defaults}
ANSIBLE_COW_ACCEPTLIST:
name: Cowsay filter acceptance list
default: ['bud-frogs', 'bunny', 'cheese', 'daemon', 'default', 'dragon', 'elephant-in-snake', 'elephant', 'eyes', 'hellokitty', 'kitty', 'luke-koala', 'meow', 'milk', 'moofasa', 'moose', 'ren', 'sheep', 'small', 'stegosaurus', 'stimpy', 'supermilker', 'three-eyes', 'turkey', 'turtle', 'tux', 'udder', 'vader-koala', 'vader', 'www']
  description: Accept list of cowsay templates that are 'safe' to use; set to an empty list if you want to enable all installed templates.
env:
- name: ANSIBLE_COW_WHITELIST
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'ANSIBLE_COW_ACCEPTLIST'
- name: ANSIBLE_COW_ACCEPTLIST
version_added: '2.11'
ini:
- key: cow_whitelist
section: defaults
deprecated:
why: normalizing names to new standard
version: "2.15"
alternatives: 'cowsay_enabled_stencils'
- key: cowsay_enabled_stencils
section: defaults
version_added: '2.11'
type: list
ANSIBLE_FORCE_COLOR:
name: Force color output
default: False
description: This option forces color mode even when running without a TTY or the "nocolor" setting is True.
env: [{name: ANSIBLE_FORCE_COLOR}]
ini:
- {key: force_color, section: defaults}
type: boolean
yaml: {key: display.force_color}
ANSIBLE_NOCOLOR:
name: Suppress color output
default: False
description: This setting allows suppressing colorizing output, which is used to give a better indication of failure and status information.
env:
- name: ANSIBLE_NOCOLOR
# this is generic convention for CLI programs
- name: NO_COLOR
version_added: '2.11'
ini:
- {key: nocolor, section: defaults}
type: boolean
yaml: {key: display.nocolor}
ANSIBLE_NOCOWS:
name: Suppress cowsay output
default: False
description: If you have cowsay installed but want to avoid the 'cows' (why????), use this.
env: [{name: ANSIBLE_NOCOWS}]
ini:
- {key: nocows, section: defaults}
type: boolean
yaml: {key: display.i_am_no_fun}
ANSIBLE_COW_PATH:
name: Set path to cowsay command
default: null
description: Specify a custom cowsay path or swap in your cowsay implementation of choice
env: [{name: ANSIBLE_COW_PATH}]
ini:
- {key: cowpath, section: defaults}
type: string
yaml: {key: display.cowpath}
ANSIBLE_PIPELINING:
name: Connection pipelining
default: False
description:
- This is a global option, each connection plugin can override either by having more specific options or not supporting pipelining at all.
- Pipelining, if supported by the connection plugin, reduces the number of network operations required to execute a module on the remote server,
by executing many Ansible modules without actual file transfer.
- It can result in a very significant performance improvement when enabled.
- "However this conflicts with privilege escalation (become). For example, when using 'sudo:' operations you must first
disable 'requiretty' in /etc/sudoers on all managed hosts, which is why it is disabled by default."
- This setting will be disabled if ``ANSIBLE_KEEP_REMOTE_FILES`` is enabled.
env:
- name: ANSIBLE_PIPELINING
ini:
- section: defaults
key: pipelining
- section: connection
key: pipelining
type: boolean
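# An illustrative sketch; as the description above notes, this only works with
# become once 'requiretty' is disabled in /etc/sudoers on the managed hosts:
#
#   [connection]
#   pipelining = True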
ANY_ERRORS_FATAL:
name: Make Task failures fatal
default: False
  description: Sets the default value for the any_errors_fatal keyword; if True, task failures will be considered fatal errors.
env:
- name: ANSIBLE_ANY_ERRORS_FATAL
ini:
- section: defaults
key: any_errors_fatal
type: boolean
yaml: {key: errors.any_task_errors_fatal}
version_added: "2.4"
BECOME_ALLOW_SAME_USER:
name: Allow becoming the same user
default: False
description:
    - This setting controls if become is skipped when the remote user and become user are the same, i.e. root sudo to root.
env: [{name: ANSIBLE_BECOME_ALLOW_SAME_USER}]
ini:
- {key: become_allow_same_user, section: privilege_escalation}
type: boolean
yaml: {key: privilege_escalation.become_allow_same_user}
BECOME_PASSWORD_FILE:
name: Become password file
default: ~
description:
- 'The password file to use for the become plugin. --become-password-file.'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_BECOME_PASSWORD_FILE}]
ini:
- {key: become_password_file, section: defaults}
type: path
version_added: '2.12'
AGNOSTIC_BECOME_PROMPT:
name: Display an agnostic become prompt
default: True
type: boolean
description: Display an agnostic become prompt instead of displaying a prompt containing the command line supplied become method
env: [{name: ANSIBLE_AGNOSTIC_BECOME_PROMPT}]
ini:
- {key: agnostic_become_prompt, section: privilege_escalation}
yaml: {key: privilege_escalation.agnostic_become_prompt}
version_added: "2.5"
CACHE_PLUGIN:
name: Persistent Cache plugin
default: memory
description: Chooses which cache plugin to use, the default 'memory' is ephemeral.
env: [{name: ANSIBLE_CACHE_PLUGIN}]
ini:
- {key: fact_caching, section: defaults}
yaml: {key: facts.cache.plugin}
CACHE_PLUGIN_CONNECTION:
name: Cache Plugin URI
default: ~
description: Defines connection or path information for the cache plugin
env: [{name: ANSIBLE_CACHE_PLUGIN_CONNECTION}]
ini:
- {key: fact_caching_connection, section: defaults}
yaml: {key: facts.cache.uri}
CACHE_PLUGIN_PREFIX:
name: Cache Plugin table prefix
default: ansible_facts
description: Prefix to use for cache plugin files/tables
env: [{name: ANSIBLE_CACHE_PLUGIN_PREFIX}]
ini:
- {key: fact_caching_prefix, section: defaults}
yaml: {key: facts.cache.prefix}
CACHE_PLUGIN_TIMEOUT:
name: Cache Plugin expiration timeout
default: 86400
description: Expiration timeout for the cache plugin data
env: [{name: ANSIBLE_CACHE_PLUGIN_TIMEOUT}]
ini:
- {key: fact_caching_timeout, section: defaults}
type: integer
yaml: {key: facts.cache.timeout}
COLLECTIONS_SCAN_SYS_PATH:
name: Scan PYTHONPATH for installed collections
description: A boolean to enable or disable scanning the sys.path for installed collections
default: true
type: boolean
env:
- {name: ANSIBLE_COLLECTIONS_SCAN_SYS_PATH}
ini:
- {key: collections_scan_sys_path, section: defaults}
COLLECTIONS_PATHS:
name: ordered list of root paths for loading installed Ansible collections content
description: >
Colon separated paths in which Ansible will search for collections content.
Collections must be in nested *subdirectories*, not directly in these directories.
For example, if ``COLLECTIONS_PATHS`` includes ``'{{ ANSIBLE_HOME ~ "/collections" }}'``,
and you want to add ``my.collection`` to that directory, it must be saved as
    ``'{{ ANSIBLE_HOME ~ "/collections/ansible_collections/my/collection" }}'``.
default: '{{ ANSIBLE_HOME ~ "/collections:/usr/share/ansible/collections" }}'
type: pathspec
env:
- name: ANSIBLE_COLLECTIONS_PATHS # TODO: Deprecate this and ini once PATH has been in a few releases.
- name: ANSIBLE_COLLECTIONS_PATH
version_added: '2.10'
ini:
- key: collections_paths
section: defaults
- key: collections_path
section: defaults
version_added: '2.10'
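# An illustrative sketch of the nested layout the description above requires,
# for a hypothetical collection my.collection under the first default path:
#
#   ~/.ansible/collections/
#   └── ansible_collections/
#       └── my/
#           └── collection/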
COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH:
name: Defines behavior when loading a collection that does not support the current Ansible version
description:
    - Controls what happens when a collection is loaded that does not support the running Ansible version (as declared by the collection metadata key `requires_ansible`).
env: [{name: ANSIBLE_COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH}]
ini: [{key: collections_on_ansible_version_mismatch, section: defaults}]
choices: &basic_error
error: issue a 'fatal' error and stop the play
warning: issue a warning but continue
ignore: just continue silently
default: warning
_COLOR_DEFAULTS: &color
name: placeholder for color settings' defaults
choices: ['black', 'bright gray', 'blue', 'white', 'green', 'bright blue', 'cyan', 'bright green', 'red', 'bright cyan', 'purple', 'bright red', 'yellow', 'bright purple', 'dark gray', 'bright yellow', 'magenta', 'bright magenta', 'normal']
COLOR_CHANGED:
<<: *color
name: Color for 'changed' task status
default: yellow
description: Defines the color to use on 'Changed' task status
env: [{name: ANSIBLE_COLOR_CHANGED}]
ini:
- {key: changed, section: colors}
COLOR_CONSOLE_PROMPT:
<<: *color
name: "Color for ansible-console's prompt task status"
default: white
description: Defines the default color to use for ansible-console
env: [{name: ANSIBLE_COLOR_CONSOLE_PROMPT}]
ini:
- {key: console_prompt, section: colors}
version_added: "2.7"
COLOR_DEBUG:
<<: *color
name: Color for debug statements
default: dark gray
description: Defines the color to use when emitting debug messages
env: [{name: ANSIBLE_COLOR_DEBUG}]
ini:
- {key: debug, section: colors}
COLOR_DEPRECATE:
<<: *color
name: Color for deprecation messages
default: purple
description: Defines the color to use when emitting deprecation messages
env: [{name: ANSIBLE_COLOR_DEPRECATE}]
ini:
- {key: deprecate, section: colors}
COLOR_DIFF_ADD:
<<: *color
name: Color for diff added display
default: green
description: Defines the color to use when showing added lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_ADD}]
ini:
- {key: diff_add, section: colors}
yaml: {key: display.colors.diff.add}
COLOR_DIFF_LINES:
<<: *color
name: Color for diff lines display
default: cyan
description: Defines the color to use when showing diffs
env: [{name: ANSIBLE_COLOR_DIFF_LINES}]
ini:
- {key: diff_lines, section: colors}
COLOR_DIFF_REMOVE:
<<: *color
name: Color for diff removed display
default: red
description: Defines the color to use when showing removed lines in diffs
env: [{name: ANSIBLE_COLOR_DIFF_REMOVE}]
ini:
- {key: diff_remove, section: colors}
COLOR_ERROR:
<<: *color
name: Color for error messages
default: red
description: Defines the color to use when emitting error messages
env: [{name: ANSIBLE_COLOR_ERROR}]
ini:
- {key: error, section: colors}
yaml: {key: colors.error}
COLOR_HIGHLIGHT:
<<: *color
name: Color for highlighting
default: white
description: Defines the color to use for highlighting
env: [{name: ANSIBLE_COLOR_HIGHLIGHT}]
ini:
- {key: highlight, section: colors}
COLOR_OK:
<<: *color
name: Color for 'ok' task status
default: green
description: Defines the color to use when showing 'OK' task status
env: [{name: ANSIBLE_COLOR_OK}]
ini:
- {key: ok, section: colors}
COLOR_SKIP:
<<: *color
name: Color for 'skip' task status
default: cyan
description: Defines the color to use when showing 'Skipped' task status
env: [{name: ANSIBLE_COLOR_SKIP}]
ini:
- {key: skip, section: colors}
COLOR_UNREACHABLE:
<<: *color
name: Color for 'unreachable' host state
default: bright red
description: Defines the color to use on 'Unreachable' status
env: [{name: ANSIBLE_COLOR_UNREACHABLE}]
ini:
- {key: unreachable, section: colors}
COLOR_VERBOSE:
<<: *color
name: Color for verbose messages
default: blue
  description: Defines the color to use when emitting verbose messages, i.e. those that show with '-v's.
env: [{name: ANSIBLE_COLOR_VERBOSE}]
ini:
- {key: verbose, section: colors}
COLOR_WARN:
<<: *color
name: Color for warning messages
default: bright purple
description: Defines the color to use when emitting warning messages
env: [{name: ANSIBLE_COLOR_WARN}]
ini:
- {key: warn, section: colors}
CONNECTION_PASSWORD_FILE:
name: Connection password file
default: ~
description: 'The password file to use for the connection plugin. --connection-password-file.'
env: [{name: ANSIBLE_CONNECTION_PASSWORD_FILE}]
ini:
- {key: connection_password_file, section: defaults}
type: path
version_added: '2.12'
COVERAGE_REMOTE_OUTPUT:
name: Sets the output directory and filename prefix to generate coverage run info.
description:
- Sets the output directory on the remote host to generate coverage reports to.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_OUTPUT}
vars:
- {name: _ansible_coverage_remote_output}
type: str
version_added: '2.9'
COVERAGE_REMOTE_PATHS:
name: Sets the list of paths to run coverage for.
description:
- A list of paths for files on the Ansible controller to run coverage for when executing on the remote host.
    - Only files that match the path glob will have their coverage collected.
- Multiple path globs can be specified and are separated by ``:``.
- Currently only used for remote coverage on PowerShell modules.
- This is for internal use only.
default: '*'
env:
- {name: _ANSIBLE_COVERAGE_REMOTE_PATH_FILTER}
type: str
version_added: '2.9'
ACTION_WARNINGS:
name: Toggle action warnings
default: True
description:
    - By default Ansible will display warnings received from a task action (module or action plugin).
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_ACTION_WARNINGS}]
ini:
- {key: action_warnings, section: defaults}
type: boolean
version_added: "2.5"
LOCALHOST_WARNING:
name: Warning when using implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when there are no hosts in the
inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_LOCALHOST_WARNING}]
ini:
- {key: localhost_warning, section: defaults}
type: boolean
version_added: "2.6"
INVENTORY_UNPARSED_WARNING:
name: Warning when no inventory files can be parsed, resulting in an implicit inventory with only localhost
default: True
description:
- By default Ansible will issue a warning when no inventory was loaded and notes that
it will use an implicit localhost-only inventory.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_WARNING}]
ini:
- {key: inventory_unparsed_warning, section: inventory}
type: boolean
version_added: "2.14"
DOC_FRAGMENT_PLUGIN_PATH:
name: documentation fragment plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/doc_fragments:/usr/share/ansible/plugins/doc_fragments" }}'
description: Colon separated paths in which Ansible will search for Documentation Fragments Plugins.
env: [{name: ANSIBLE_DOC_FRAGMENT_PLUGINS}]
ini:
- {key: doc_fragment_plugins, section: defaults}
type: pathspec
DEFAULT_ACTION_PLUGIN_PATH:
name: Action plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/action:/usr/share/ansible/plugins/action" }}'
description: Colon separated paths in which Ansible will search for Action Plugins.
env: [{name: ANSIBLE_ACTION_PLUGINS}]
ini:
- {key: action_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.action.path}
DEFAULT_ALLOW_UNSAFE_LOOKUPS:
name: Allow unsafe lookups
default: False
description:
- "When enabled, this option allows lookup plugins (whether used in variables as ``{{lookup('foo')}}`` or as a loop as with_foo)
to return data that is not marked 'unsafe'."
- By default, such data is marked as unsafe to prevent the templating engine from evaluating any jinja2 templating language,
as this could represent a security risk. This option is provided to allow for backward compatibility,
      however, users should first consider adding allow_unsafe=True to any lookups which may be expected to contain data that may be run
      through the templating engine later.
env: []
ini:
- {key: allow_unsafe_lookups, section: defaults}
type: boolean
version_added: "2.2.3"
DEFAULT_ASK_PASS:
name: Ask for the login password
default: False
description:
- This controls whether an Ansible playbook should prompt for a login password.
If using SSH keys for authentication, you probably do not need to change this setting.
env: [{name: ANSIBLE_ASK_PASS}]
ini:
- {key: ask_pass, section: defaults}
type: boolean
yaml: {key: defaults.ask_pass}
DEFAULT_ASK_VAULT_PASS:
name: Ask for the vault password(s)
default: False
description:
- This controls whether an Ansible playbook should prompt for a vault password.
env: [{name: ANSIBLE_ASK_VAULT_PASS}]
ini:
- {key: ask_vault_pass, section: defaults}
type: boolean
DEFAULT_BECOME:
name: Enable privilege escalation (become)
default: False
description: Toggles the use of privilege escalation, allowing you to 'become' another user after login.
env: [{name: ANSIBLE_BECOME}]
ini:
- {key: become, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_ASK_PASS:
name: Ask for the privilege escalation (become) password
default: False
description: Toggle to prompt for privilege escalation password.
env: [{name: ANSIBLE_BECOME_ASK_PASS}]
ini:
- {key: become_ask_pass, section: privilege_escalation}
type: boolean
DEFAULT_BECOME_METHOD:
name: Choose privilege escalation method
default: 'sudo'
description: Privilege escalation method to use when `become` is enabled.
env: [{name: ANSIBLE_BECOME_METHOD}]
ini:
- {section: privilege_escalation, key: become_method}
DEFAULT_BECOME_EXE:
name: Choose 'become' executable
default: ~
description: 'executable to use for privilege escalation, otherwise Ansible will depend on PATH'
env: [{name: ANSIBLE_BECOME_EXE}]
ini:
- {key: become_exe, section: privilege_escalation}
DEFAULT_BECOME_FLAGS:
name: Set 'become' executable options
default: ~
description: Flags to pass to the privilege escalation executable.
env: [{name: ANSIBLE_BECOME_FLAGS}]
ini:
- {key: become_flags, section: privilege_escalation}
BECOME_PLUGIN_PATH:
name: Become plugins path
default: '{{ ANSIBLE_HOME ~ "/plugins/become:/usr/share/ansible/plugins/become" }}'
description: Colon separated paths in which Ansible will search for Become Plugins.
env: [{name: ANSIBLE_BECOME_PLUGINS}]
ini:
- {key: become_plugins, section: defaults}
type: pathspec
version_added: "2.8"
DEFAULT_BECOME_USER:
# FIXME: should really be blank and make -u passing optional depending on it
name: Set the user you 'become' via privilege escalation
default: root
description: The user your login/remote user 'becomes' when using privilege escalation, most systems will use 'root' when no user is specified.
env: [{name: ANSIBLE_BECOME_USER}]
ini:
- {key: become_user, section: privilege_escalation}
yaml: {key: become.user}
DEFAULT_CACHE_PLUGIN_PATH:
name: Cache Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cache:/usr/share/ansible/plugins/cache" }}'
description: Colon separated paths in which Ansible will search for Cache Plugins.
env: [{name: ANSIBLE_CACHE_PLUGINS}]
ini:
- {key: cache_plugins, section: defaults}
type: pathspec
DEFAULT_CALLBACK_PLUGIN_PATH:
name: Callback Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/callback:/usr/share/ansible/plugins/callback" }}'
description: Colon separated paths in which Ansible will search for Callback Plugins.
env: [{name: ANSIBLE_CALLBACK_PLUGINS}]
ini:
- {key: callback_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.callback.path}
CALLBACKS_ENABLED:
name: Enable callback plugins that require it.
default: []
description:
- "List of enabled callbacks, not all callbacks need enabling,
but many of those shipped with Ansible do as we don't want them activated by default."
env:
- name: ANSIBLE_CALLBACKS_ENABLED
version_added: '2.11'
ini:
- key: callbacks_enabled
section: defaults
version_added: '2.11'
type: list
DEFAULT_CLICONF_PLUGIN_PATH:
name: Cliconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/cliconf:/usr/share/ansible/plugins/cliconf" }}'
description: Colon separated paths in which Ansible will search for Cliconf Plugins.
env: [{name: ANSIBLE_CLICONF_PLUGINS}]
ini:
- {key: cliconf_plugins, section: defaults}
type: pathspec
DEFAULT_CONNECTION_PLUGIN_PATH:
name: Connection Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/connection:/usr/share/ansible/plugins/connection" }}'
description: Colon separated paths in which Ansible will search for Connection Plugins.
env: [{name: ANSIBLE_CONNECTION_PLUGINS}]
ini:
- {key: connection_plugins, section: defaults}
type: pathspec
yaml: {key: plugins.connection.path}
DEFAULT_DEBUG:
name: Debug mode
default: False
description:
- "Toggles debug output in Ansible. This is *very* verbose and can hinder
multiprocessing. Debug output can also include secret information
despite no_log settings being enabled, which means debug mode should not be used in
production."
env: [{name: ANSIBLE_DEBUG}]
ini:
- {key: debug, section: defaults}
type: boolean
DEFAULT_EXECUTABLE:
name: Target shell executable
default: /bin/sh
description:
- "This indicates the command to use to spawn a shell under for Ansible's execution needs on a target.
Users may need to change this in rare instances when shell usage is constrained, but in most cases it may be left as is."
env: [{name: ANSIBLE_EXECUTABLE}]
ini:
- {key: executable, section: defaults}
DEFAULT_FACT_PATH:
name: local fact path
description:
- "This option allows you to globally configure a custom path for 'local_facts' for the implied :ref:`ansible_collections.ansible.builtin.setup_module` task when using fact gathering."
- "If not set, it will fallback to the default from the ``ansible.builtin.setup`` module: ``/etc/ansible/facts.d``."
- "This does **not** affect user defined tasks that use the ``ansible.builtin.setup`` module."
- The real action being created by the implicit task is currently ``ansible.legacy.gather_facts`` module, which then calls the configured fact modules,
by default this will be ``ansible.builtin.setup`` for POSIX systems but other platforms might have different defaults.
env: [{name: ANSIBLE_FACT_PATH}]
ini:
- {key: fact_path, section: defaults}
type: string
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_FILTER_PLUGIN_PATH:
name: Jinja2 Filter Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/filter:/usr/share/ansible/plugins/filter" }}'
description: Colon separated paths in which Ansible will search for Jinja2 Filter Plugins.
env: [{name: ANSIBLE_FILTER_PLUGINS}]
ini:
- {key: filter_plugins, section: defaults}
type: pathspec
DEFAULT_FORCE_HANDLERS:
name: Force handlers to run after failure
default: False
description:
- This option controls if notified handlers run on a host even if a failure occurs on that host.
- When false, the handlers will not run if a failure has occurred on a host.
- This can also be set per play or on the command line. See Handlers and Failure for more details.
env: [{name: ANSIBLE_FORCE_HANDLERS}]
ini:
- {key: force_handlers, section: defaults}
type: boolean
version_added: "1.9.1"
DEFAULT_FORKS:
name: Number of task forks
default: 5
description: Maximum number of forks Ansible will use to execute tasks on target hosts.
env: [{name: ANSIBLE_FORKS}]
ini:
- {key: forks, section: defaults}
type: integer
DEFAULT_GATHERING:
name: Gathering behaviour
default: 'implicit'
description:
- This setting controls the default policy of fact gathering (facts discovered about remote systems).
- "This option can be useful for those wishing to save fact gathering time. Both 'smart' and 'explicit' will use the cache plugin."
env: [{name: ANSIBLE_GATHERING}]
ini:
- key: gathering
section: defaults
version_added: "1.6"
choices:
implicit: "the cache plugin will be ignored and facts will be gathered per play unless 'gather_facts: False' is set."
explicit: facts will not be gathered unless directly requested in the play.
smart: each new host that has no facts discovered will be scanned, but if the same host is addressed in multiple plays it will not be contacted again in the run.
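# An illustrative sketch pairing 'smart' gathering with a fact cache, since
# both 'smart' and 'explicit' use the cache plugin; the jsonfile cache and its
# path are placeholder assumptions:
#
#   [defaults]
#   gathering = smart
#   fact_caching = jsonfile
#   fact_caching_connection = /tmp/ansible_facts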
DEFAULT_GATHER_SUBSET:
name: Gather facts subset
description:
- Set the `gather_subset` option for the :ref:`ansible_collections.ansible.builtin.setup_module` task in the implicit fact gathering.
See the module documentation for specifics.
- "It does **not** apply to user defined ``ansible.builtin.setup`` tasks."
env: [{name: ANSIBLE_GATHER_SUBSET}]
ini:
- key: gather_subset
section: defaults
version_added: "2.1"
type: list
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_GATHER_TIMEOUT:
name: Gather facts timeout
description:
- Set the timeout in seconds for the implicit fact gathering, see the module documentation for specifics.
- "It does **not** apply to user defined :ref:`ansible_collections.ansible.builtin.setup_module` tasks."
env: [{name: ANSIBLE_GATHER_TIMEOUT}]
ini:
- {key: gather_timeout, section: defaults}
type: integer
deprecated:
# TODO: when removing set playbook/play.py to default=None
why: the module_defaults keyword is a more generic version and can apply to all calls to the
M(ansible.builtin.gather_facts) or M(ansible.builtin.setup) actions
version: "2.18"
alternatives: module_defaults
DEFAULT_HASH_BEHAVIOUR:
name: Hash merge behaviour
default: replace
type: string
choices:
replace: Any variable that is defined more than once is overwritten using the order from variable precedence rules (highest wins).
merge: Any dictionary variable will be recursively merged with new definitions across the different variable definition sources.
description:
- This setting controls how duplicate definitions of dictionary variables (aka hash, map, associative array) are handled in Ansible.
- This does not affect variables whose values are scalars (integers, strings) or arrays.
- "**WARNING**, changing this setting is not recommended as this is fragile and makes your content (plays, roles, collections) non portable,
leading to continual confusion and misuse. Don't change this setting unless you think you have an absolute need for it."
- We recommend avoiding reusing variable names and relying on the ``combine`` filter and ``vars`` and ``varnames`` lookups
to create merged versions of the individual variables. In our experience this is rarely really needed and a sign that too much
complexity has been introduced into the data structures and plays.
- For some uses you can also look into custom vars_plugins to merge on input, even substituting the default ``host_group_vars``
that is in charge of parsing the ``host_vars/`` and ``group_vars/`` directories. Most users of this setting are only interested in inventory scope,
but the setting itself affects all sources and makes debugging even harder.
- All playbooks and roles in the official examples repos assume the default for this setting.
    - Changing the setting to ``merge`` applies across variable sources, but many sources will internally still overwrite the variables.
      For example ``include_vars`` will dedupe variables internally before updating Ansible, with 'last defined' overwriting previous definitions in the same file.
    - The Ansible project recommends you **avoid ``merge`` for new projects** (see the commented example below for the recommended alternative).
    - It is the intention of the Ansible developers to eventually deprecate and remove this setting, but it is being kept as some users do heavily rely on it.
env: [{name: ANSIBLE_HASH_BEHAVIOUR}]
ini:
- {key: hash_behaviour, section: defaults}
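# An illustrative sketch of the recommended alternative to 'merge': keep the
# default 'replace' and merge explicitly with the combine filter; the variable
# names are placeholder assumptions:
#
#   merged_settings: "{{ default_settings | combine(env_settings, recursive=True) }}"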
DEFAULT_HOST_LIST:
name: Inventory Source
default: /etc/ansible/hosts
description: Comma separated list of Ansible inventory sources
env:
- name: ANSIBLE_INVENTORY
expand_relative_paths: True
ini:
- key: inventory
section: defaults
type: pathlist
yaml: {key: defaults.inventory}
DEFAULT_HTTPAPI_PLUGIN_PATH:
name: HttpApi Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/httpapi:/usr/share/ansible/plugins/httpapi" }}'
description: Colon separated paths in which Ansible will search for HttpApi Plugins.
env: [{name: ANSIBLE_HTTPAPI_PLUGINS}]
ini:
- {key: httpapi_plugins, section: defaults}
type: pathspec
DEFAULT_INTERNAL_POLL_INTERVAL:
name: Internal poll interval
default: 0.001
env: []
ini:
- {key: internal_poll_interval, section: defaults}
type: float
version_added: "2.2"
description:
- This sets the interval (in seconds) of Ansible internal processes polling each other.
Lower values improve performance with large playbooks at the expense of extra CPU load.
Higher values are more suitable for Ansible usage in automation scenarios,
when UI responsiveness is not required but CPU usage might be a concern.
- "The default corresponds to the value hardcoded in Ansible <= 2.1"
DEFAULT_INVENTORY_PLUGIN_PATH:
name: Inventory Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/inventory:/usr/share/ansible/plugins/inventory" }}'
description: Colon separated paths in which Ansible will search for Inventory Plugins.
env: [{name: ANSIBLE_INVENTORY_PLUGINS}]
ini:
- {key: inventory_plugins, section: defaults}
type: pathspec
DEFAULT_JINJA2_EXTENSIONS:
name: Enabled Jinja2 extensions
default: []
description:
- This is a developer-specific feature that allows enabling additional Jinja2 extensions.
- "See the Jinja2 documentation for details. If you do not know what these do, you probably don't need to change this setting :)"
env: [{name: ANSIBLE_JINJA2_EXTENSIONS}]
ini:
- {key: jinja2_extensions, section: defaults}
DEFAULT_JINJA2_NATIVE:
name: Use Jinja2's NativeEnvironment for templating
default: False
description: This option preserves variable types during template operations.
env: [{name: ANSIBLE_JINJA2_NATIVE}]
ini:
- {key: jinja2_native, section: defaults}
type: boolean
yaml: {key: jinja2_native}
version_added: 2.7
DEFAULT_KEEP_REMOTE_FILES:
name: Keep remote files
default: False
description:
- Enables/disables the cleaning up of the temporary files Ansible used to execute the tasks on the remote.
- If this option is enabled it will disable ``ANSIBLE_PIPELINING``.
env: [{name: ANSIBLE_KEEP_REMOTE_FILES}]
ini:
- {key: keep_remote_files, section: defaults}
type: boolean
DEFAULT_LIBVIRT_LXC_NOSECLABEL:
# TODO: move to plugin
name: No security label on Lxc
default: False
description:
- "This setting causes libvirt to connect to lxc containers by passing --noseclabel to virsh.
This is necessary when running on systems which do not have SELinux."
env:
- name: ANSIBLE_LIBVIRT_LXC_NOSECLABEL
ini:
- {key: libvirt_lxc_noseclabel, section: selinux}
type: boolean
version_added: "2.1"
DEFAULT_LOAD_CALLBACK_PLUGINS:
name: Load callbacks for adhoc
default: False
description:
- Controls whether callback plugins are loaded when running /usr/bin/ansible.
This may be used to log activity from the command line, send notifications, and so on.
Callback plugins are always loaded for ``ansible-playbook``.
env: [{name: ANSIBLE_LOAD_CALLBACK_PLUGINS}]
ini:
- {key: bin_ansible_callbacks, section: defaults}
type: boolean
version_added: "1.8"
DEFAULT_LOCAL_TMP:
name: Controller temporary directory
default: '{{ ANSIBLE_HOME ~ "/tmp" }}'
description: Temporary directory for Ansible to use on the controller.
env: [{name: ANSIBLE_LOCAL_TEMP}]
ini:
- {key: local_tmp, section: defaults}
type: tmppath
DEFAULT_LOG_PATH:
name: Ansible log file path
default: ~
description: File to which Ansible will log on the controller. When empty logging is disabled.
env: [{name: ANSIBLE_LOG_PATH}]
ini:
- {key: log_path, section: defaults}
type: path
DEFAULT_LOG_FILTER:
name: Name filters for python logger
default: []
description: List of logger names to filter out of the log file
env: [{name: ANSIBLE_LOG_FILTER}]
ini:
- {key: log_filter, section: defaults}
type: list
DEFAULT_LOOKUP_PLUGIN_PATH:
name: Lookup Plugins Path
description: Colon separated paths in which Ansible will search for Lookup Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/lookup:/usr/share/ansible/plugins/lookup" }}'
env: [{name: ANSIBLE_LOOKUP_PLUGINS}]
ini:
- {key: lookup_plugins, section: defaults}
type: pathspec
yaml: {key: defaults.lookup_plugins}
DEFAULT_MANAGED_STR:
name: Ansible managed
default: 'Ansible managed'
description: Sets the macro for the 'ansible_managed' variable available for :ref:`ansible_collections.ansible.builtin.template_module` and :ref:`ansible_collections.ansible.windows.win_template_module`. This is only relevant for those two modules.
env: []
ini:
- {key: ansible_managed, section: defaults}
yaml: {key: defaults.ansible_managed}
DEFAULT_MODULE_ARGS:
name: Adhoc default arguments
default: ~
description:
- This sets the default arguments to pass to the ``ansible`` adhoc binary if no ``-a`` is specified.
env: [{name: ANSIBLE_MODULE_ARGS}]
ini:
- {key: module_args, section: defaults}
DEFAULT_MODULE_COMPRESSION:
name: Python module compression
default: ZIP_DEFLATED
description: Compression scheme to use when transferring Python modules to the target.
env: []
ini:
- {key: module_compression, section: defaults}
# vars:
# - name: ansible_module_compression
DEFAULT_MODULE_NAME:
name: Default adhoc module
default: command
description: "Module to use with the ``ansible`` AdHoc command, if none is specified via ``-m``."
env: []
ini:
- {key: module_name, section: defaults}
DEFAULT_MODULE_PATH:
name: Modules Path
description: Colon separated paths in which Ansible will search for Modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/modules:/usr/share/ansible/plugins/modules" }}'
env: [{name: ANSIBLE_LIBRARY}]
ini:
- {key: library, section: defaults}
type: pathspec
DEFAULT_MODULE_UTILS_PATH:
name: Module Utils Path
description: Colon separated paths in which Ansible will search for Module utils files, which are shared by modules.
default: '{{ ANSIBLE_HOME ~ "/plugins/module_utils:/usr/share/ansible/plugins/module_utils" }}'
env: [{name: ANSIBLE_MODULE_UTILS}]
ini:
- {key: module_utils, section: defaults}
type: pathspec
DEFAULT_NETCONF_PLUGIN_PATH:
name: Netconf Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/netconf:/usr/share/ansible/plugins/netconf" }}'
description: Colon separated paths in which Ansible will search for Netconf Plugins.
env: [{name: ANSIBLE_NETCONF_PLUGINS}]
ini:
- {key: netconf_plugins, section: defaults}
type: pathspec
DEFAULT_NO_LOG:
name: No log
default: False
description: "Toggle Ansible's display and logging of task details, mainly used to avoid security disclosures."
env: [{name: ANSIBLE_NO_LOG}]
ini:
- {key: no_log, section: defaults}
type: boolean
DEFAULT_NO_TARGET_SYSLOG:
name: No syslog on target
default: False
description:
- Toggle Ansible logging to syslog on the target when it executes tasks. On Windows hosts this will disable newer
style PowerShell modules from writing to the event log.
env: [{name: ANSIBLE_NO_TARGET_SYSLOG}]
ini:
- {key: no_target_syslog, section: defaults}
vars:
- name: ansible_no_target_syslog
version_added: '2.10'
type: boolean
yaml: {key: defaults.no_target_syslog}
DEFAULT_NULL_REPRESENTATION:
name: Represent a null
default: ~
description: What templating should return as a 'null' value. When not set it will let Jinja2 decide.
env: [{name: ANSIBLE_NULL_REPRESENTATION}]
ini:
- {key: null_representation, section: defaults}
type: raw
DEFAULT_POLL_INTERVAL:
name: Async poll interval
default: 15
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how often to check back on the status of those tasks when an explicit poll interval is not supplied.
The default is a reasonably moderate 15 seconds which is a tradeoff between checking in frequently and
providing a quick turnaround when something may have completed.
env: [{name: ANSIBLE_POLL_INTERVAL}]
ini:
- {key: poll_interval, section: defaults}
type: integer
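# Illustrative only: a sketch of where this interval applies. With 'async:'
# set and no explicit 'poll:' keyword, task status is checked every
# DEFAULT_POLL_INTERVAL seconds; the task name, binary, and timings below are made up.
#
#   - name: long running operation
#     ansible.builtin.command: /usr/local/bin/slow_job   # hypothetical binary
#     async: 300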
DEFAULT_PRIVATE_KEY_FILE:
name: Private key file
default: ~
description:
- Option for connections using a certificate or key file to authenticate, rather than an agent or passwords,
you can set the default value here to avoid re-specifying --private-key with every invocation.
env: [{name: ANSIBLE_PRIVATE_KEY_FILE}]
ini:
- {key: private_key_file, section: defaults}
type: path
DEFAULT_PRIVATE_ROLE_VARS:
name: Private role variables
default: False
description:
- Makes role variables inaccessible from other roles.
- This was introduced as a way to reset role variables to default values if
a role is used more than once in a playbook.
env: [{name: ANSIBLE_PRIVATE_ROLE_VARS}]
ini:
- {key: private_role_vars, section: defaults}
type: boolean
yaml: {key: defaults.private_role_vars}
DEFAULT_REMOTE_PORT:
name: Remote port
default: ~
description: Port to use in remote connections, when blank it will use the connection plugin default.
env: [{name: ANSIBLE_REMOTE_PORT}]
ini:
- {key: remote_port, section: defaults}
type: integer
yaml: {key: defaults.remote_port}
DEFAULT_REMOTE_USER:
name: Login/Remote User
description:
- Sets the login user for the target machines
- "When blank it uses the connection plugin's default, normally the user currently executing Ansible."
env: [{name: ANSIBLE_REMOTE_USER}]
ini:
- {key: remote_user, section: defaults}
DEFAULT_ROLES_PATH:
name: Roles path
default: '{{ ANSIBLE_HOME ~ "/roles:/usr/share/ansible/roles:/etc/ansible/roles" }}'
description: Colon separated paths in which Ansible will search for Roles.
env: [{name: ANSIBLE_ROLES_PATH}]
expand_relative_paths: True
ini:
- {key: roles_path, section: defaults}
type: pathspec
yaml: {key: defaults.roles_path}
DEFAULT_SELINUX_SPECIAL_FS:
name: Problematic file systems
default: fuse, nfs, vboxsf, ramfs, 9p, vfat
description:
- "Some filesystems do not support safe operations and/or return inconsistent errors,
this setting makes Ansible 'tolerate' those in the list w/o causing fatal errors."
- Data corruption may occur and writes are not always verified when a filesystem is in the list.
env:
- name: ANSIBLE_SELINUX_SPECIAL_FS
version_added: "2.9"
ini:
- {key: special_context_filesystems, section: selinux}
type: list
DEFAULT_STDOUT_CALLBACK:
name: Main display callback plugin
default: default
description:
- "Set the main callback used to display Ansible output. You can only have one at a time."
- You can have many other callbacks, but just one can be in charge of stdout.
- See :ref:`callback_plugins` for a list of available options.
env: [{name: ANSIBLE_STDOUT_CALLBACK}]
ini:
- {key: stdout_callback, section: defaults}
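# Illustrative only: selecting a different stdout callback via ansible.cfg;
# this assumes the 'yaml' callback (from community.general) is installed.
#
#   [defaults]
#   stdout_callback = yaml
#
# Equivalently for a single run: ANSIBLE_STDOUT_CALLBACK=yaml ansible-playbook ...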
ENABLE_TASK_DEBUGGER:
name: Whether to enable the task debugger
default: False
description:
- Whether or not to enable the task debugger; this previously was done as a strategy plugin.
- Now all strategy plugins can inherit this behavior. The debugger defaults to activating when
a task fails or a host is unreachable. Use the debugger keyword for more flexibility.
type: boolean
env: [{name: ANSIBLE_ENABLE_TASK_DEBUGGER}]
ini:
- {key: enable_task_debugger, section: defaults}
version_added: "2.5"
TASK_DEBUGGER_IGNORE_ERRORS:
name: Whether a failed task with ignore_errors=True will still invoke the debugger
default: True
description:
- This option defines whether the task debugger will be invoked on a failed task when ignore_errors=True
is specified.
- True specifies that the debugger will honor ignore_errors, while False will not.
type: boolean
env: [{name: ANSIBLE_TASK_DEBUGGER_IGNORE_ERRORS}]
ini:
- {key: task_debugger_ignore_errors, section: defaults}
version_added: "2.7"
DEFAULT_STRATEGY:
name: Implied strategy
default: 'linear'
description: Set the default strategy used for plays.
env: [{name: ANSIBLE_STRATEGY}]
ini:
- {key: strategy, section: defaults}
version_added: "2.3"
DEFAULT_STRATEGY_PLUGIN_PATH:
name: Strategy Plugins Path
description: Colon separated paths in which Ansible will search for Strategy Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/strategy:/usr/share/ansible/plugins/strategy" }}'
env: [{name: ANSIBLE_STRATEGY_PLUGINS}]
ini:
- {key: strategy_plugins, section: defaults}
type: pathspec
DEFAULT_SU:
default: False
description: 'Toggle the use of "su" for tasks.'
env: [{name: ANSIBLE_SU}]
ini:
- {key: su, section: defaults}
type: boolean
yaml: {key: defaults.su}
DEFAULT_SYSLOG_FACILITY:
name: syslog facility
default: LOG_USER
description: Syslog facility to use when Ansible logs to the remote target
env: [{name: ANSIBLE_SYSLOG_FACILITY}]
ini:
- {key: syslog_facility, section: defaults}
DEFAULT_TERMINAL_PLUGIN_PATH:
name: Terminal Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/terminal:/usr/share/ansible/plugins/terminal" }}'
description: Colon separated paths in which Ansible will search for Terminal Plugins.
env: [{name: ANSIBLE_TERMINAL_PLUGINS}]
ini:
- {key: terminal_plugins, section: defaults}
type: pathspec
DEFAULT_TEST_PLUGIN_PATH:
name: Jinja2 Test Plugins Path
description: Colon separated paths in which Ansible will search for Jinja2 Test Plugins.
default: '{{ ANSIBLE_HOME ~ "/plugins/test:/usr/share/ansible/plugins/test" }}'
env: [{name: ANSIBLE_TEST_PLUGINS}]
ini:
- {key: test_plugins, section: defaults}
type: pathspec
DEFAULT_TIMEOUT:
name: Connection timeout
default: 10
description: This is the default timeout for connection plugins to use.
env: [{name: ANSIBLE_TIMEOUT}]
ini:
- {key: timeout, section: defaults}
type: integer
DEFAULT_TRANSPORT:
# note that ssh_utils refs this and needs to be updated if removed
name: Connection plugin
default: smart
description: "Default connection plugin to use, the 'smart' option will toggle between 'ssh' and 'paramiko' depending on controller OS and ssh versions"
env: [{name: ANSIBLE_TRANSPORT}]
ini:
- {key: transport, section: defaults}
DEFAULT_UNDEFINED_VAR_BEHAVIOR:
name: Jinja2 fail on undefined
default: True
version_added: "1.3"
description:
- When True, this causes ansible templating to fail steps that reference variable names that are likely typoed.
- "Otherwise, any '{{ template_expression }}' that contains undefined variables will be rendered in a template or ansible action line exactly as written."
env: [{name: ANSIBLE_ERROR_ON_UNDEFINED_VARS}]
ini:
- {key: error_on_undefined_vars, section: defaults}
type: boolean
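# Illustrative only: with error_on_undefined_vars = False, a template line
# such as the made-up one below is rendered exactly as written instead of
# failing the task.
#
#   msg: "Hello {{ not_defined_anywhere }}"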
DEFAULT_VARS_PLUGIN_PATH:
name: Vars Plugins Path
default: '{{ ANSIBLE_HOME ~ "/plugins/vars:/usr/share/ansible/plugins/vars" }}'
description: Colon separated paths in which Ansible will search for Vars Plugins.
env: [{name: ANSIBLE_VARS_PLUGINS}]
ini:
- {key: vars_plugins, section: defaults}
type: pathspec
# TODO: unused?
#DEFAULT_VAR_COMPRESSION_LEVEL:
# default: 0
# description: 'TODO: write it'
# env: [{name: ANSIBLE_VAR_COMPRESSION_LEVEL}]
# ini:
# - {key: var_compression_level, section: defaults}
# type: integer
# yaml: {key: defaults.var_compression_level}
DEFAULT_VAULT_ID_MATCH:
name: Force vault id match
default: False
description: 'If true, decrypting vaults with a vault id will only try the password from the matching vault-id'
env: [{name: ANSIBLE_VAULT_ID_MATCH}]
ini:
- {key: vault_id_match, section: defaults}
yaml: {key: defaults.vault_id_match}
DEFAULT_VAULT_IDENTITY:
name: Vault id label
default: default
description: 'The label to use for the default vault id label in cases where a vault id label is not provided'
env: [{name: ANSIBLE_VAULT_IDENTITY}]
ini:
- {key: vault_identity, section: defaults}
yaml: {key: defaults.vault_identity}
DEFAULT_VAULT_ENCRYPT_IDENTITY:
name: Vault id to use for encryption
description: 'The vault_id to use for encrypting by default. If multiple vault_ids are provided, this specifies which to use for encryption. The --encrypt-vault-id cli option overrides the configured value.'
env: [{name: ANSIBLE_VAULT_ENCRYPT_IDENTITY}]
ini:
- {key: vault_encrypt_identity, section: defaults}
yaml: {key: defaults.vault_encrypt_identity}
DEFAULT_VAULT_IDENTITY_LIST:
name: Default vault ids
default: []
description: 'A list of vault-ids to use by default. Equivalent to multiple --vault-id args. Vault-ids are tried in order.'
env: [{name: ANSIBLE_VAULT_IDENTITY_LIST}]
ini:
- {key: vault_identity_list, section: defaults}
type: list
yaml: {key: defaults.vault_identity_list}
DEFAULT_VAULT_PASSWORD_FILE:
name: Vault password file
default: ~
description:
- 'The vault password file to use. Equivalent to --vault-password-file or --vault-id'
- If executable, it will be run and the resulting stdout will be used as the password.
env: [{name: ANSIBLE_VAULT_PASSWORD_FILE}]
ini:
- {key: vault_password_file, section: defaults}
type: path
yaml: {key: defaults.vault_password_file}
DEFAULT_VERBOSITY:
name: Verbosity
default: 0
description: Sets the default verbosity, equivalent to the number of ``-v`` passed in the command line.
env: [{name: ANSIBLE_VERBOSITY}]
ini:
- {key: verbosity, section: defaults}
type: integer
DEPRECATION_WARNINGS:
name: Deprecation messages
default: True
description: "Toggle to control the showing of deprecation warnings"
env: [{name: ANSIBLE_DEPRECATION_WARNINGS}]
ini:
- {key: deprecation_warnings, section: defaults}
type: boolean
DEVEL_WARNING:
name: Running devel warning
default: True
description: Toggle to control showing warnings related to running devel
env: [{name: ANSIBLE_DEVEL_WARNING}]
ini:
- {key: devel_warning, section: defaults}
type: boolean
DIFF_ALWAYS:
name: Show differences
default: False
description: Configuration toggle to tell modules to show differences when in 'changed' status, equivalent to ``--diff``.
env: [{name: ANSIBLE_DIFF_ALWAYS}]
ini:
- {key: always, section: diff}
type: bool
DIFF_CONTEXT:
name: Difference context
default: 3
description: How many lines of context to show when displaying the differences between files.
env: [{name: ANSIBLE_DIFF_CONTEXT}]
ini:
- {key: context, section: diff}
type: integer
DISPLAY_ARGS_TO_STDOUT:
name: Show task arguments
default: False
description:
- "Normally ``ansible-playbook`` will print a header for each task that is run.
These headers will contain the name: field from the task if you specified one.
If you didn't then ``ansible-playbook`` uses the task's action to help you tell which task is presently running.
Sometimes you run many of the same action and so you want more information about the task to differentiate it from others of the same action.
If you set this variable to True in the config then ``ansible-playbook`` will also include the task's arguments in the header."
- "This setting defaults to False because there is a chance that you have sensitive values in your parameters and
you do not want those to be printed."
- "If you set this to True you should be sure that you have secured your environment's stdout
(no one can shoulder surf your screen and you aren't saving stdout to an insecure file) or
made sure that all of your playbooks explicitly added the ``no_log: True`` parameter to tasks which have sensitive values
See How do I keep secret data in my playbook? for more information."
env: [{name: ANSIBLE_DISPLAY_ARGS_TO_STDOUT}]
ini:
- {key: display_args_to_stdout, section: defaults}
type: boolean
version_added: "2.1"
DISPLAY_SKIPPED_HOSTS:
name: Show skipped results
default: True
description: "Toggle to control displaying skipped task/host entries in a task in the default callback"
env:
- name: ANSIBLE_DISPLAY_SKIPPED_HOSTS
ini:
- {key: display_skipped_hosts, section: defaults}
type: boolean
DOCSITE_ROOT_URL:
name: Root docsite URL
default: https://docs.ansible.com/ansible-core/
description: Root docsite URL used to generate docs URLs in warning/error text;
must be an absolute URL with valid scheme and trailing slash.
ini:
- {key: docsite_root_url, section: defaults}
version_added: "2.8"
DUPLICATE_YAML_DICT_KEY:
name: Controls ansible behaviour when finding duplicate keys in YAML.
default: warn
description:
- By default Ansible will issue a warning when a duplicate dict key is encountered in YAML.
- These warnings can be silenced by adjusting this setting to False.
env: [{name: ANSIBLE_DUPLICATE_YAML_DICT_KEY}]
ini:
- {key: duplicate_dict_key, section: defaults}
type: string
choices: &basic_error2
error: issue a 'fatal' error and stop the play
warn: issue a warning but continue
ignore: just continue silently
version_added: "2.9"
ERROR_ON_MISSING_HANDLER:
name: Missing handler error
default: True
description: "Toggle to allow missing handlers to become a warning instead of an error when notifying."
env: [{name: ANSIBLE_ERROR_ON_MISSING_HANDLER}]
ini:
- {key: error_on_missing_handler, section: defaults}
type: boolean
CONNECTION_FACTS_MODULES:
name: Map of connections to fact modules
default:
# use ansible.legacy names on unqualified facts modules to allow library/ overrides
asa: ansible.legacy.asa_facts
cisco.asa.asa: cisco.asa.asa_facts
eos: ansible.legacy.eos_facts
arista.eos.eos: arista.eos.eos_facts
frr: ansible.legacy.frr_facts
frr.frr.frr: frr.frr.frr_facts
ios: ansible.legacy.ios_facts
cisco.ios.ios: cisco.ios.ios_facts
iosxr: ansible.legacy.iosxr_facts
cisco.iosxr.iosxr: cisco.iosxr.iosxr_facts
junos: ansible.legacy.junos_facts
junipernetworks.junos.junos: junipernetworks.junos.junos_facts
nxos: ansible.legacy.nxos_facts
cisco.nxos.nxos: cisco.nxos.nxos_facts
vyos: ansible.legacy.vyos_facts
vyos.vyos.vyos: vyos.vyos.vyos_facts
exos: ansible.legacy.exos_facts
extreme.exos.exos: extreme.exos.exos_facts
slxos: ansible.legacy.slxos_facts
extreme.slxos.slxos: extreme.slxos.slxos_facts
voss: ansible.legacy.voss_facts
extreme.voss.voss: extreme.voss.voss_facts
ironware: ansible.legacy.ironware_facts
community.network.ironware: community.network.ironware_facts
description: "Which modules to run during a play's fact gathering stage based on connection"
type: dict
FACTS_MODULES:
name: Gather Facts Modules
default:
- smart
description:
- "Which modules to run during a play's fact gathering stage, using the default of 'smart' will try to figure it out based on connection type."
- "If adding your own modules but you still want to use the default Ansible facts, you will want to include 'setup'
or corresponding network module to the list (if you add 'smart', Ansible will also figure it out)."
- "This does not affect explicit calls to the 'setup' module, but does always affect the 'gather_facts' action (implicit or explicit)."
env: [{name: ANSIBLE_FACTS_MODULES}]
ini:
- {key: facts_modules, section: defaults}
type: list
vars:
- name: ansible_facts_modules
GALAXY_IGNORE_CERTS:
name: Galaxy validate certs
description:
- If set to yes, ansible-galaxy will not validate TLS certificates.
This can be useful for testing against a server with a self-signed certificate.
env: [{name: ANSIBLE_GALAXY_IGNORE}]
ini:
- {key: ignore_certs, section: galaxy}
type: boolean
GALAXY_ROLE_SKELETON:
name: Galaxy role skeleton directory
description: Role skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy``/``ansible-galaxy role``, same as ``--role-skeleton``.
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON}]
ini:
- {key: role_skeleton, section: galaxy}
type: path
GALAXY_ROLE_SKELETON_IGNORE:
name: Galaxy role skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy role or collection skeleton directory
env: [{name: ANSIBLE_GALAXY_ROLE_SKELETON_IGNORE}]
ini:
- {key: role_skeleton_ignore, section: galaxy}
type: list
GALAXY_COLLECTION_SKELETON:
name: Galaxy collection skeleton directory
description: Collection skeleton directory to use as a template for the ``init`` action in ``ansible-galaxy collection``, same as ``--collection-skeleton``.
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON}]
ini:
- {key: collection_skeleton, section: galaxy}
type: path
GALAXY_COLLECTION_SKELETON_IGNORE:
name: Galaxy collection skeleton ignore
default: ["^.git$", "^.*/.git_keep$"]
description: patterns of files to ignore inside a Galaxy collection skeleton directory
env: [{name: ANSIBLE_GALAXY_COLLECTION_SKELETON_IGNORE}]
ini:
- {key: collection_skeleton_ignore, section: galaxy}
type: list
# TODO: unused?
#GALAXY_SCMS:
# name: Galaxy SCMS
# default: git, hg
# description: Available galaxy source control management systems.
# env: [{name: ANSIBLE_GALAXY_SCMS}]
# ini:
# - {key: scms, section: galaxy}
# type: list
GALAXY_SERVER:
default: https://galaxy.ansible.com
description: "URL to prepend when roles don't specify the full URI, assume they are referencing this server as the source."
env: [{name: ANSIBLE_GALAXY_SERVER}]
ini:
- {key: server, section: galaxy}
yaml: {key: galaxy.server}
GALAXY_SERVER_LIST:
description:
- A list of Galaxy servers to use when installing a collection.
- The value corresponds to the config ini header ``[galaxy_server.{{item}}]`` which defines the server details.
- 'See :ref:`galaxy_server_config` for more details on how to define a Galaxy server.'
- The order of servers in this list is used as the order in which a collection is resolved.
- Setting this config option will ignore the :ref:`galaxy_server` config option.
env: [{name: ANSIBLE_GALAXY_SERVER_LIST}]
ini:
- {key: server_list, section: galaxy}
type: list
version_added: "2.9"
GALAXY_TOKEN_PATH:
default: '{{ ANSIBLE_HOME ~ "/galaxy_token" }}'
description: "Local path to galaxy access token file"
env: [{name: ANSIBLE_GALAXY_TOKEN_PATH}]
ini:
- {key: token_path, section: galaxy}
type: path
version_added: "2.9"
GALAXY_DISPLAY_PROGRESS:
default: ~
description:
- Some steps in ``ansible-galaxy`` display a progress wheel which can cause issues on certain displays or when
outputting the stdout to a file.
- This config option controls whether the display wheel is shown or not.
- The default is to show the display wheel if stdout has a tty.
env: [{name: ANSIBLE_GALAXY_DISPLAY_PROGRESS}]
ini:
- {key: display_progress, section: galaxy}
type: bool
version_added: "2.10"
GALAXY_CACHE_DIR:
default: '{{ ANSIBLE_HOME ~ "/galaxy_cache" }}'
description:
- The directory that stores cached responses from a Galaxy server.
- This is only used by the ``ansible-galaxy collection install`` and ``download`` commands.
- Cache files inside this dir will be ignored if they are world writable.
env:
- name: ANSIBLE_GALAXY_CACHE_DIR
ini:
- section: galaxy
key: cache_dir
type: path
version_added: '2.11'
GALAXY_DISABLE_GPG_VERIFY:
default: false
type: bool
env:
- name: ANSIBLE_GALAXY_DISABLE_GPG_VERIFY
ini:
- section: galaxy
key: disable_gpg_verify
description:
- Disable GPG signature verification during collection installation.
version_added: '2.13'
GALAXY_GPG_KEYRING:
type: path
env:
- name: ANSIBLE_GALAXY_GPG_KEYRING
ini:
- section: galaxy
key: gpg_keyring
description:
- Configure the keyring used for GPG signature verification during collection installation and verification.
version_added: '2.13'
GALAXY_IGNORE_INVALID_SIGNATURE_STATUS_CODES:
type: list
env:
- name: ANSIBLE_GALAXY_IGNORE_SIGNATURE_STATUS_CODES
ini:
- section: galaxy
key: ignore_signature_status_codes
description:
- A list of GPG status codes to ignore during GPG signature verification.
See L(https://github.com/gpg/gnupg/blob/master/doc/DETAILS#general-status-codes) for status code descriptions.
- If fewer signatures successfully verify the collection than `GALAXY_REQUIRED_VALID_SIGNATURE_COUNT`,
signature verification will fail even if all error codes are ignored.
choices:
- EXPSIG
- EXPKEYSIG
- REVKEYSIG
- BADSIG
- ERRSIG
- NO_PUBKEY
- MISSING_PASSPHRASE
- BAD_PASSPHRASE
- NODATA
- UNEXPECTED
- ERROR
- FAILURE
- BADARMOR
- KEYEXPIRED
- KEYREVOKED
- NO_SECKEY
GALAXY_REQUIRED_VALID_SIGNATURE_COUNT:
type: str
default: 1
env:
- name: ANSIBLE_GALAXY_REQUIRED_VALID_SIGNATURE_COUNT
ini:
- section: galaxy
key: required_valid_signature_count
description:
- The number of signatures that must be successful during GPG signature verification while installing or verifying collections.
- This should be a positive integer or ``all`` to indicate that all signatures must successfully validate the collection.
- Prepend ``+`` to the value to fail if no valid signatures are found for the collection.
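# Illustrative only: accepted forms of this value in ansible.cfg (pick one):
#
#   [galaxy]
#   required_valid_signature_count = 2     # at least two signatures must verify
#   required_valid_signature_count = all   # every provided signature must verify
#   required_valid_signature_count = +1    # also fail when no valid signature exists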
HOST_KEY_CHECKING:
# note: constant not in use by ssh plugin anymore
# TODO: check non ssh connection plugins for use/migration
name: Check host keys
default: True
description: 'Set this to "False" if you want to avoid host key checking by the underlying tools Ansible uses to connect to the host'
env: [{name: ANSIBLE_HOST_KEY_CHECKING}]
ini:
- {key: host_key_checking, section: defaults}
type: boolean
HOST_PATTERN_MISMATCH:
name: Control host pattern mismatch behaviour
default: 'warning'
description: This setting changes the behaviour of mismatched host patterns; it allows you to force a fatal error, a warning, or to just ignore it.
env: [{name: ANSIBLE_HOST_PATTERN_MISMATCH}]
ini:
- {key: host_pattern_mismatch, section: inventory}
choices:
<<: *basic_error
version_added: "2.8"
INTERPRETER_PYTHON:
name: Python interpreter path (or automatic discovery behavior) used for module execution
default: auto
env: [{name: ANSIBLE_PYTHON_INTERPRETER}]
ini:
- {key: interpreter_python, section: defaults}
vars:
- {name: ansible_python_interpreter}
version_added: "2.8"
description:
- Path to the Python interpreter to be used for module execution on remote targets, or an automatic discovery mode.
Supported discovery modes are ``auto`` (the default), ``auto_silent``, ``auto_legacy``, and ``auto_legacy_silent``.
All discovery modes employ a lookup table to use the included system Python (on distributions known to include one),
falling back to a fixed ordered list of well-known Python interpreter locations if a platform-specific default is not
available. The fallback behavior will issue a warning that the interpreter should be set explicitly (since interpreters
installed later may change which one is used). This warning behavior can be disabled by setting ``auto_silent`` or
``auto_legacy_silent``. The value of ``auto_legacy`` provides all the same behavior, but for backwards-compatibility
with older Ansible releases that always defaulted to ``/usr/bin/python``, will use that interpreter if present.
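# Illustrative only: discovery can be bypassed per host by pinning the
# interpreter in inventory; the hostname below is made up.
#
#   [dbservers]
#   db01.example.com ansible_python_interpreter=/usr/bin/python3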
_INTERPRETER_PYTHON_DISTRO_MAP:
name: Mapping of known included platform pythons for various Linux distros
default:
redhat:
'6': /usr/bin/python
'8': /usr/libexec/platform-python
'9': /usr/bin/python3
debian:
'8': /usr/bin/python
'10': /usr/bin/python3
fedora:
'23': /usr/bin/python3
ubuntu:
'14': /usr/bin/python
'16': /usr/bin/python3
version_added: "2.8"
# FUTURE: add inventory override once we're sure it can't be abused by a rogue target
# FUTURE: add a platform layer to the map so we could use for, eg, freebsd/macos/etc?
INTERPRETER_PYTHON_FALLBACK:
name: Ordered list of Python interpreters to check for in discovery
default:
- python3.11
- python3.10
- python3.9
- python3.8
- python3.7
- python3.6
- python3.5
- /usr/bin/python3
- /usr/libexec/platform-python
- python2.7
- /usr/bin/python
- python
vars:
- name: ansible_interpreter_python_fallback
type: list
version_added: "2.8"
TRANSFORM_INVALID_GROUP_CHARS:
name: Transform invalid characters in group names
default: 'never'
description:
- Make ansible transform invalid characters in group names supplied by inventory sources.
env: [{name: ANSIBLE_TRANSFORM_INVALID_GROUP_CHARS}]
ini:
- {key: force_valid_group_names, section: defaults}
type: string
choices:
always: it will replace any invalid characters with '_' (underscore) and warn the user
never: it will allow for the group name but warn about the issue
ignore: it does the same as 'never', without issuing a warning
silently: it does the same as 'always', without issuing a warning
version_added: '2.8'
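# Illustrative only: under 'always' (or 'silently') an inventory group named
# 'web-servers' would be rewritten to 'web_servers'; under 'never' or 'ignore'
# the original name is kept as-is.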
INVALID_TASK_ATTRIBUTE_FAILED:
name: Controls whether invalid attributes for a task result in errors instead of warnings
default: True
description: If 'false', invalid attributes for a task will result in warnings instead of errors
type: boolean
env:
- name: ANSIBLE_INVALID_TASK_ATTRIBUTE_FAILED
ini:
- key: invalid_task_attribute_failed
section: defaults
version_added: "2.7"
INVENTORY_ANY_UNPARSED_IS_FAILED:
name: Controls whether any unparseable inventory source is a fatal error
default: False
description: >
If 'true', it is a fatal error when any given inventory source
cannot be successfully parsed by any available inventory plugin;
otherwise, this situation only attracts a warning.
type: boolean
env: [{name: ANSIBLE_INVENTORY_ANY_UNPARSED_IS_FAILED}]
ini:
- {key: any_unparsed_is_failed, section: inventory}
version_added: "2.7"
INVENTORY_CACHE_ENABLED:
name: Inventory caching enabled
default: False
description:
- Toggle to turn on inventory caching.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE}]
ini:
- {key: cache, section: inventory}
type: bool
INVENTORY_CACHE_PLUGIN:
name: Inventory cache plugin
description:
- The plugin for caching inventory.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN}]
ini:
- {key: cache_plugin, section: inventory}
INVENTORY_CACHE_PLUGIN_CONNECTION:
name: Inventory cache plugin URI to override the defaults section
description:
- The inventory cache connection.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_CONNECTION}]
ini:
- {key: cache_connection, section: inventory}
INVENTORY_CACHE_PLUGIN_PREFIX:
name: Inventory cache plugin table prefix
description:
- The table prefix for the cache plugin.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
env: [{name: ANSIBLE_INVENTORY_CACHE_PLUGIN_PREFIX}]
default: ansible_inventory_
ini:
- {key: cache_prefix, section: inventory}
INVENTORY_CACHE_TIMEOUT:
name: Inventory cache plugin expiration timeout
description:
- Expiration timeout for the inventory cache plugin data.
- This setting has been moved to the individual inventory plugins as a plugin option :ref:`inventory_plugins`.
- The existing configuration settings are still accepted with the inventory plugin adding additional options from inventory and fact cache configuration.
- This message will be removed in 2.16.
default: 3600
env: [{name: ANSIBLE_INVENTORY_CACHE_TIMEOUT}]
ini:
- {key: cache_timeout, section: inventory}
INVENTORY_ENABLED:
name: Active Inventory plugins
default: ['host_list', 'script', 'auto', 'yaml', 'ini', 'toml']
description: List of enabled inventory plugins; it also determines the order in which they are used.
env: [{name: ANSIBLE_INVENTORY_ENABLED}]
ini:
- {key: enable_plugins, section: inventory}
type: list
INVENTORY_EXPORT:
name: Set ansible-inventory into export mode
default: False
description: Controls if ansible-inventory will accurately reflect Ansible's view into inventory or if it is optimized for exporting.
env: [{name: ANSIBLE_INVENTORY_EXPORT}]
ini:
- {key: export, section: inventory}
type: bool
INVENTORY_IGNORE_EXTS:
name: Inventory ignore extensions
default: "{{(REJECT_EXTS + ('.orig', '.ini', '.cfg', '.retry'))}}"
description: List of extensions to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE}]
ini:
- {key: inventory_ignore_extensions, section: defaults}
- {key: ignore_extensions, section: inventory}
type: list
INVENTORY_IGNORE_PATTERNS:
name: Inventory ignore patterns
default: []
description: List of patterns to ignore when using a directory as an inventory source
env: [{name: ANSIBLE_INVENTORY_IGNORE_REGEX}]
ini:
- {key: inventory_ignore_patterns, section: defaults}
- {key: ignore_patterns, section: inventory}
type: list
INVENTORY_UNPARSED_IS_FAILED:
name: Unparsed Inventory failure
default: False
description: >
If 'true' it is a fatal error if every single potential inventory
source fails to parse, otherwise this situation will only attract a
warning.
env: [{name: ANSIBLE_INVENTORY_UNPARSED_FAILED}]
ini:
- {key: unparsed_is_failed, section: inventory}
type: bool
JINJA2_NATIVE_WARNING:
name: Running older than required Jinja version for jinja2_native warning
default: True
description: Toggle to control showing warnings related to running a Jinja version
older than required for jinja2_native
env:
- name: ANSIBLE_JINJA2_NATIVE_WARNING
deprecated:
why: This option is no longer used in the Ansible Core code base.
version: "2.17"
ini:
- {key: jinja2_native_warning, section: defaults}
type: boolean
MAX_FILE_SIZE_FOR_DIFF:
name: Diff maximum file size
default: 104448
description: Maximum size of files to be considered for diff display
env: [{name: ANSIBLE_MAX_DIFF_SIZE}]
ini:
- {key: max_diff_size, section: defaults}
type: int
NETWORK_GROUP_MODULES:
name: Network module families
default: [eos, nxos, ios, iosxr, junos, enos, ce, vyos, sros, dellos9, dellos10, dellos6, asa, aruba, aireos, bigip, ironware, onyx, netconf, exos, voss, slxos]
description: 'TODO: write it'
env:
- name: ANSIBLE_NETWORK_GROUP_MODULES
ini:
- {key: network_group_modules, section: defaults}
type: list
yaml: {key: defaults.network_group_modules}
INJECT_FACTS_AS_VARS:
default: True
description:
- Facts are available inside the `ansible_facts` variable, this setting also pushes them as their own vars in the main namespace.
- Unlike inside the `ansible_facts` dictionary, these will have an `ansible_` prefix.
env: [{name: ANSIBLE_INJECT_FACT_VARS}]
ini:
- {key: inject_facts_as_vars, section: defaults}
type: boolean
version_added: "2.5"
MODULE_IGNORE_EXTS:
name: Module ignore extensions
default: "{{(REJECT_EXTS + ('.yaml', '.yml', '.ini'))}}"
description:
- List of extensions to ignore when looking for modules to load
- This is for rejecting script and binary module fallback extensions
env: [{name: ANSIBLE_MODULE_IGNORE_EXTS}]
ini:
- {key: module_ignore_exts, section: defaults}
type: list
OLD_PLUGIN_CACHE_CLEARING:
description: Previously Ansible would only clear some of the plugin loading caches when loading new roles; this led to some behaviours in which a plugin loaded in previous plays would be unexpectedly 'sticky'. This setting allows a return to that behaviour.
env: [{name: ANSIBLE_OLD_PLUGIN_CACHE_CLEAR}]
ini:
- {key: old_plugin_cache_clear, section: defaults}
type: boolean
default: False
version_added: "2.8"
PARAMIKO_HOST_KEY_AUTO_ADD:
# TODO: move to plugin
default: False
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD}]
ini:
- {key: host_key_auto_add, section: paramiko_connection}
type: boolean
PARAMIKO_LOOK_FOR_KEYS:
name: look for keys
default: True
description: 'TODO: write it'
env: [{name: ANSIBLE_PARAMIKO_LOOK_FOR_KEYS}]
ini:
- {key: look_for_keys, section: paramiko_connection}
type: boolean
PERSISTENT_CONTROL_PATH_DIR:
name: Persistence socket path
default: '{{ ANSIBLE_HOME ~ "/pc" }}'
description: Path to socket to be used by the connection persistence system.
env: [{name: ANSIBLE_PERSISTENT_CONTROL_PATH_DIR}]
ini:
- {key: control_path_dir, section: persistent_connection}
type: path
PERSISTENT_CONNECT_TIMEOUT:
name: Persistence timeout
default: 30
description: This controls how long the persistent connection will remain idle before it is destroyed.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_TIMEOUT}]
ini:
- {key: connect_timeout, section: persistent_connection}
type: integer
PERSISTENT_CONNECT_RETRY_TIMEOUT:
name: Persistence connection retry timeout
default: 15
description: This controls the retry timeout for persistent connection to connect to the local domain socket.
env: [{name: ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT}]
ini:
- {key: connect_retry_timeout, section: persistent_connection}
type: integer
PERSISTENT_COMMAND_TIMEOUT:
name: Persistence command timeout
default: 30
description: This controls the amount of time to wait for response from remote device before timing out persistent connection.
env: [{name: ANSIBLE_PERSISTENT_COMMAND_TIMEOUT}]
ini:
- {key: command_timeout, section: persistent_connection}
type: int
PLAYBOOK_DIR:
name: playbook dir override for non-playbook CLIs (ala --playbook-dir)
version_added: "2.9"
description:
- A number of non-playbook CLIs have a ``--playbook-dir`` argument; this sets the default value for it.
env: [{name: ANSIBLE_PLAYBOOK_DIR}]
ini: [{key: playbook_dir, section: defaults}]
type: path
PLAYBOOK_VARS_ROOT:
name: playbook vars files root
default: top
version_added: "2.4.1"
description:
- This sets which playbook dirs will be used as a root to process vars plugins, which includes finding host_vars/group_vars
env: [{name: ANSIBLE_PLAYBOOK_VARS_ROOT}]
ini:
- {key: playbook_vars_root, section: defaults}
choices:
top: follows the traditional behavior of using the top playbook in the chain to find the root directory.
bottom: follows the 2.4.0 behavior of using the current playbook to find the root directory.
all: examines from the first parent to the current playbook.
PLUGIN_FILTERS_CFG:
name: Config file for limiting valid plugins
default: null
version_added: "2.5.0"
description:
- "A path to configuration for filtering which plugins installed on the system are allowed to be used."
- "See :ref:`plugin_filtering_config` for details of the filter file's format."
- " The default is /etc/ansible/plugin_filters.yml"
ini:
- key: plugin_filters_cfg
section: defaults
type: path
PYTHON_MODULE_RLIMIT_NOFILE:
name: Adjust maximum file descriptor soft limit during Python module execution
description:
- Attempts to set RLIMIT_NOFILE soft limit to the specified value when executing Python modules (can speed up subprocess usage on
Python 2.x. See https://bugs.python.org/issue11284). The value will be limited by the existing hard limit. Default
value of 0 does not attempt to adjust existing system-defined limits.
default: 0
env:
- {name: ANSIBLE_PYTHON_MODULE_RLIMIT_NOFILE}
ini:
- {key: python_module_rlimit_nofile, section: defaults}
vars:
- {name: ansible_python_module_rlimit_nofile}
version_added: '2.8'
RETRY_FILES_ENABLED:
name: Retry files
default: False
description: This controls whether a failed Ansible playbook should create a .retry file.
env: [{name: ANSIBLE_RETRY_FILES_ENABLED}]
ini:
- {key: retry_files_enabled, section: defaults}
type: bool
RETRY_FILES_SAVE_PATH:
name: Retry files path
default: ~
description:
- This sets the path in which Ansible will save .retry files when a playbook fails and retry files are enabled.
- This file will be overwritten after each run with the list of failed hosts from all plays.
env: [{name: ANSIBLE_RETRY_FILES_SAVE_PATH}]
ini:
- {key: retry_files_save_path, section: defaults}
type: path
RUN_VARS_PLUGINS:
name: When should vars plugins run relative to inventory
default: demand
description:
- This setting can be used to optimize vars_plugin usage depending on the user's inventory size and play selection.
env: [{name: ANSIBLE_RUN_VARS_PLUGINS}]
ini:
- {key: run_vars_plugins, section: defaults}
type: str
choices:
demand: will run vars_plugins relative to inventory sources anytime vars are 'demanded' by tasks.
start: will run vars_plugins relative to inventory sources after importing that inventory source.
version_added: "2.10"
SHOW_CUSTOM_STATS:
name: Display custom stats
default: False
description: 'This adds the custom stats set via the set_stats plugin to the default output'
env: [{name: ANSIBLE_SHOW_CUSTOM_STATS}]
ini:
- {key: show_custom_stats, section: defaults}
type: bool
STRING_TYPE_FILTERS:
name: Filters to preserve strings
default: [string, to_json, to_nice_json, to_yaml, to_nice_yaml, ppretty, json]
description:
- "This list of filters avoids 'type conversion' when templating variables"
- Useful when you want to avoid conversion into lists or dictionaries for JSON strings, for example.
env: [{name: ANSIBLE_STRING_TYPE_FILTERS}]
ini:
- {key: dont_type_filters, section: jinja2}
type: list
SYSTEM_WARNINGS:
name: System warnings
default: True
description:
- Allows disabling of warnings related to potential issues on the system running ansible itself (not on the managed hosts)
- These may include warnings about 3rd party packages or other conditions that should be resolved if possible.
env: [{name: ANSIBLE_SYSTEM_WARNINGS}]
ini:
- {key: system_warnings, section: defaults}
type: boolean
TAGS_RUN:
name: Run Tags
default: []
type: list
description: Default list of tags to run in your plays; Skip Tags has precedence.
env: [{name: ANSIBLE_RUN_TAGS}]
ini:
- {key: run, section: tags}
version_added: "2.5"
TAGS_SKIP:
name: Skip Tags
default: []
type: list
description: Default list of tags to skip in your plays; has precedence over Run Tags.
env: [{name: ANSIBLE_SKIP_TAGS}]
ini:
- {key: skip, section: tags}
version_added: "2.5"
TASK_TIMEOUT:
name: Task Timeout
default: 0
description:
- Set the maximum time (in seconds) that a task can run for.
- If set to 0 (the default) there is no timeout.
env: [{name: ANSIBLE_TASK_TIMEOUT}]
ini:
- {key: task_timeout, section: defaults}
type: integer
version_added: '2.10'
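# Illustrative only: capping every task at 30 seconds for a single run;
# the playbook name is made up.
#
#   ANSIBLE_TASK_TIMEOUT=30 ansible-playbook site.yml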
WORKER_SHUTDOWN_POLL_COUNT:
name: Worker Shutdown Poll Count
default: 0
description:
- The maximum number of times to check Task Queue Manager worker processes to verify they have exited cleanly.
- After this limit is reached any worker processes still running will be terminated.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_COUNT}]
type: integer
version_added: '2.10'
WORKER_SHUTDOWN_POLL_DELAY:
name: Worker Shutdown Poll Delay
default: 0.1
description:
- The number of seconds to sleep between polling loops when checking Task Queue Manager worker processes to verify they have exited cleanly.
- This is for internal use only.
env: [{name: ANSIBLE_WORKER_SHUTDOWN_POLL_DELAY}]
type: float
version_added: '2.10'
USE_PERSISTENT_CONNECTIONS:
name: Persistence
default: False
description: Toggles the use of persistence for connections.
env: [{name: ANSIBLE_USE_PERSISTENT_CONNECTIONS}]
ini:
- {key: use_persistent_connections, section: defaults}
type: boolean
VARIABLE_PLUGINS_ENABLED:
name: Vars plugin enabled list
default: ['host_group_vars']
description: Accept list for variable plugins that require it.
env: [{name: ANSIBLE_VARS_ENABLED}]
ini:
- {key: vars_plugins_enabled, section: defaults}
type: list
version_added: "2.10"
VARIABLE_PRECEDENCE:
name: Group variable precedence
default: ['all_inventory', 'groups_inventory', 'all_plugins_inventory', 'all_plugins_play', 'groups_plugins_inventory', 'groups_plugins_play']
description: Allows changing the group variable precedence merge order.
env: [{name: ANSIBLE_PRECEDENCE}]
ini:
- {key: precedence, section: defaults}
type: list
version_added: "2.4"
WIN_ASYNC_STARTUP_TIMEOUT:
name: Windows Async Startup Timeout
default: 5
description:
- For asynchronous tasks in Ansible (covered in Asynchronous Actions and Polling),
this is how long, in seconds, to wait for the task spawned by Ansible to connect back to the named pipe used
on Windows systems. The default is 5 seconds. This can be too low on slower systems, or systems under heavy load.
- This is not the total time an async command can run for, but is a separate timeout to wait for an async command to
start. The task will only start to be timed against its async_timeout once it has connected to the pipe, so the
overall maximum duration the task can take will be extended by the amount specified here.
env: [{name: ANSIBLE_WIN_ASYNC_STARTUP_TIMEOUT}]
ini:
- {key: win_async_startup_timeout, section: defaults}
type: integer
vars:
- {name: ansible_win_async_startup_timeout}
version_added: '2.10'
YAML_FILENAME_EXTENSIONS:
name: Valid YAML extensions
default: [".yml", ".yaml", ".json"]
description:
- "Check all of these extensions when looking for 'variable' files which should be YAML or JSON or vaulted versions of these."
- 'This affects vars_files, include_vars, inventory and vars plugins among others.'
env:
- name: ANSIBLE_YAML_FILENAME_EXT
ini:
- section: defaults
key: yaml_valid_extensions
type: list
NETCONF_SSH_CONFIG:
description: This variable is used to enable a bastion/jump host with a netconf connection. If set to True, the bastion/jump
host ssh settings should be present in the ~/.ssh/config file; alternatively, it can be set
to a custom ssh configuration file path from which to read the bastion/jump host settings.
env: [{name: ANSIBLE_NETCONF_SSH_CONFIG}]
ini:
- {key: ssh_config, section: netconf_connection}
yaml: {key: netconf_connection.ssh_config}
default: null
STRING_CONVERSION_ACTION:
version_added: '2.8'
description:
- Action to take when a module parameter value is converted to a string (this does not affect variables).
For string parameters, values such as '1.00', "['a', 'b',]", and 'yes', 'y', etc.
will be converted by the YAML parser unless fully quoted.
- Valid options are 'error', 'warn', and 'ignore'.
- Since 2.8, this option defaults to 'warn' but will change to 'error' in 2.12.
default: 'warn'
env:
- name: ANSIBLE_STRING_CONVERSION_ACTION
ini:
- section: defaults
key: string_conversion_action
type: string
VALIDATE_ACTION_GROUP_METADATA:
version_added: '2.12'
description:
- A toggle to disable validating a collection's 'metadata' entry for a module_defaults action group.
Metadata containing unexpected fields or value types will produce a warning when this is True.
default: True
env: [{name: ANSIBLE_VALIDATE_ACTION_GROUP_METADATA}]
ini:
- section: defaults
key: validate_action_group_metadata
type: bool
VERBOSE_TO_STDERR:
version_added: '2.8'
description:
- Force 'verbose' option to use stderr instead of stdout
default: False
env:
- name: ANSIBLE_VERBOSE_TO_STDERR
ini:
- section: defaults
key: verbose_to_stderr
type: bool
...
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,795 |
Dynamic inventory slow on large inventories
|
### Summary
Dynamic inventories have grown particularly slow when they load up a large number of machines. Our inventory of 10k+ nodes is taking a little over 60 seconds from command start to the first ansible task running - assuming a fully cached local inventory which skips the fetch calls.
The issue appears to have started somewhere around Ansible 4.
Edit: Updated to highlight the roughly 2x slowdown between core 2.11.11 and 2.11.12. I'm still trying to pin down where the 3.4 -> 4.0 jump came from.
```
ansible 3.4.0
ansible-base 2.10.17
7 seconds
ansible 4.0.0
ansible-core 2.11.11
19 seconds
ansible 4.0.0
ansible-core 2.11.12
46 seconds
ansible 4.10.0
ansible-core 2.11.12
41 seconds
ansible 6.4.0
ansible-core 2.13.4
73 seconds - Note: this is longer as caching is not working due to a separate issue
47 seconds (with the fetch step excluded)
```
The V4 -> V6 increase is likely #78767 but this wouldn't explain the 4x jump from V3 -> V4.
### Issue Type
Bug Report
### Component Name
inventory
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.4]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CALLBACKS_ENABLED(/usr/share/ansible/ansible.cfg) = ['syslog', 'timer', 'yaml']
COLLECTIONS_PATHS(/usr/share/ansible/ansible.cfg) = ['/usr/lib/python3.8/site-packages/ansible_collections']
COLOR_VERBOSE(/usr/share/ansible/ansible.cfg) = bright blue
DEFAULT_ACTION_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_FILTER_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/filter']
DEFAULT_GATHERING(/usr/share/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/inventory/consort.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/usr/share/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/usr/share/ansible/ansible.cfg) = /root/.ansible/logs/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/lookup']
DEFAULT_MODULE_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/library']
DEFAULT_MODULE_UTILS_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/module_utils']
DEFAULT_ROLES_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/usr/share/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/usr/share/ansible/ansible.cfg) = 15
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.vault_pass.key
INVENTORY_CACHE_PLUGIN(/usr/share/ansible/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/usr/share/ansible/ansible.cfg) = /root/.ansible/cache
RETRY_FILES_ENABLED(/usr/share/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/usr/share/ansible/ansible.cfg) = ignore
CONNECTION:
==========
ssh:
___
pipelining(/usr/share/ansible/ansible.cfg) = True
ssh_args(/usr/share/ansible/ansible.cfg) = -C -o ServerAliveInterval=5 -o ServerAliveCountMax=2 -o ControlMaster=auto -o ControlPersist=120s -o PreferredAuthentications=publickey,password -o UserKnownHostsFile=/dev/null -o StrictHostKe
timeout(/usr/share/ansible/ansible.cfg) = 15
INVENTORY:
=========
consort:
_______
cache_connection(/usr/share/ansible/ansible.cfg) = /root/.ansible/cache
cache_plugin(/usr/share/ansible/ansible.cfg) = jsonfile
```
### OS / Environment
Centos 7 Docker container
### Steps to Reproduce
Example timings from the hosts
```
ansible 3.4.0
ansible-base 2.10.17
Averages around 7 seconds to process the inventory and start tasks
# time ansible localhost -vvvv --list-hosts
ansible 2.10.17
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
Using /usr/share/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
inventory/consort _populate took 3337ms
inventory/consort parse took 4046ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m12.943s
user 0m10.350s
sys 0m1.612s
#######
ansible 4.10.0
ansible-core 2.11.12
# time ansible localhost -vvvv --list-hosts
ansible [core 2.11.12]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
Using /usr/share/ansible/ansible.cfg as config file
[DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names to new standard, use callbacks_enabled instead. This feature will be removed from ansible-core in version 2.15. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
inventory/consort _populate took 3284ms
inventory/consort parse took 3972ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m36.764s
user 0m34.052s
sys 0m1.802s
########
ansible 6.4.0
ansible-core 2.13.4
# time ansible localhost -vvvv --list-hosts
ansible [core 2.13.4]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
Using /usr/share/ansible/ansible.cfg as config file
[DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names to new standard, use callbacks_enabled instead. This feature will be removed from ansible-core in version 2.15. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
Using inventory plugin 'consort' to process inventory source '/usr/share/ansible/inventory/consort.yml'
inventory/consort _populate took 3272ms
inventory/consort parse took 3902ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m39.021s
user 0m35.999s
sys 0m1.989s
```
### Expected Results
Less of an increase in inventory parsing time between versions.
### Actual Results
```console
Long startup times processing the inventory data on every command.
Possibly not related, but we did see a behaviour change during the post-populate stage where ansible was making a lot of calls to `host_group_vars`
2026 1662134089.85213: Loading VarsModule 'host_group_vars' from /usr/lib/python3.8/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
2026 1662134089.86151: Loading ModuleDocFragment 'vars_plugin_staging' from /usr/lib/python3.8/site-packages/ansible/plugins/doc_fragments/vars_plugin_staging.py (found_in_cache=True, class_only=False)
2026 1662134089.86523: Loaded config def from plugin (vars/host_group_vars)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78795
|
https://github.com/ansible/ansible/pull/78859
|
71adb02142dbbd8f3c083ab2921f0d4b651edf64
|
4115ddd135a0445092c9f9a7b5904942ceedd57c
| 2022-09-16T15:14:09Z |
python
| 2022-09-27T15:34:59Z |
changelogs/fragments/plugin_loader_fix.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,795 |
Dynamic inventory slow on large inventories
|
### Summary
Dynamic inventories have grown particularly slow when they load up a large number of machines. Our inventory of 10k+ nodes is taking a little over 60 seconds from command start to the first ansible task running - assuming a fully cached local inventory which skips the fetch calls.
The issue appears to have started somewhere around Ansible 4.
Edit: Updated to highlight the roughly 2x slowdown between core 2.11.11 and 2.11.12. I'm still trying to pin down where the 3.4 -> 4.0 jump came from.
```
ansible 3.4.0
ansible-base 2.10.17
7 seconds
ansible 4.0.0
ansible-core 2.11.11
19 seconds
ansible 4.0.0
ansible-core 2.11.12
46 seconds
ansible 4.10.0
ansible-core 2.11.12
41 seconds
ansible 6.4.0
ansible-core 2.13.4
73 seconds - Note: this is longer as caching is not working due to a separate issue
47 seconds (with the fetch step excluded)
```
The V4 -> V6 increase is likely #78767 but this wouldn't explain the 4x jump from V3 -> V4.
### Issue Type
Bug Report
### Component Name
inventory
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.4]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CALLBACKS_ENABLED(/usr/share/ansible/ansible.cfg) = ['syslog', 'timer', 'yaml']
COLLECTIONS_PATHS(/usr/share/ansible/ansible.cfg) = ['/usr/lib/python3.8/site-packages/ansible_collections']
COLOR_VERBOSE(/usr/share/ansible/ansible.cfg) = bright blue
DEFAULT_ACTION_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/action']
DEFAULT_CALLBACK_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/callback']
DEFAULT_FILTER_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/filter']
DEFAULT_GATHERING(/usr/share/ansible/ansible.cfg) = smart
DEFAULT_HOST_LIST(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/inventory/consort.yml']
DEFAULT_INVENTORY_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/inventory']
DEFAULT_LOAD_CALLBACK_PLUGINS(/usr/share/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/usr/share/ansible/ansible.cfg) = /root/.ansible/logs/ansible.log
DEFAULT_LOOKUP_PLUGIN_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/plugins/lookup']
DEFAULT_MODULE_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/library']
DEFAULT_MODULE_UTILS_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/module_utils']
DEFAULT_ROLES_PATH(/usr/share/ansible/ansible.cfg) = ['/usr/share/ansible/roles']
DEFAULT_STDOUT_CALLBACK(/usr/share/ansible/ansible.cfg) = yaml
DEFAULT_TIMEOUT(/usr/share/ansible/ansible.cfg) = 15
DEFAULT_VAULT_PASSWORD_FILE(env: ANSIBLE_VAULT_PASSWORD_FILE) = /root/.ansible/.vault_pass.key
INVENTORY_CACHE_PLUGIN(/usr/share/ansible/ansible.cfg) = jsonfile
INVENTORY_CACHE_PLUGIN_CONNECTION(/usr/share/ansible/ansible.cfg) = /root/.ansible/cache
RETRY_FILES_ENABLED(/usr/share/ansible/ansible.cfg) = False
TRANSFORM_INVALID_GROUP_CHARS(/usr/share/ansible/ansible.cfg) = ignore
CONNECTION:
==========
ssh:
___
pipelining(/usr/share/ansible/ansible.cfg) = True
ssh_args(/usr/share/ansible/ansible.cfg) = -C -o ServerAliveInterval=5 -o ServerAliveCountMax=2 -o ControlMaster=auto -o ControlPersist=120s -o PreferredAuthentications=publickey,password -o UserKnownHostsFile=/dev/null -o StrictHostKe
timeout(/usr/share/ansible/ansible.cfg) = 15
INVENTORY:
=========
consort:
_______
cache_connection(/usr/share/ansible/ansible.cfg) = /root/.ansible/cache
cache_plugin(/usr/share/ansible/ansible.cfg) = jsonfile
```
### OS / Environment
Centos 7 Docker container
### Steps to Reproduce
Example timings from the hosts
```
ansible 3.4.0
ansible-base 2.10.17
Averages around 7 seconds to process the inventory and start tasks
# time ansible localhost -vvvv --list-hosts
ansible 2.10.17
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
Using /usr/share/ansible/ansible.cfg as config file
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
inventory/consort _populate took 3337ms
inventory/consort parse took 4046ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m12.943s
user 0m10.350s
sys 0m1.612s
#######
ansible 4.10.0
ansible-core 2.11.12
# time ansible localhost -vvvv --list-hosts
ansible [core 2.11.12]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
Using /usr/share/ansible/ansible.cfg as config file
[DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names to new standard, use callbacks_enabled instead. This feature will be removed from ansible-core in version 2.15. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
inventory/consort _populate took 3284ms
inventory/consort parse took 3972ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m36.764s
user 0m34.052s
sys 0m1.802s
########
ansible 6.4.0
ansible-core 2.13.4
# time ansible localhost -vvvv --list-hosts
ansible [core 2.13.4]
config file = /usr/share/ansible/ansible.cfg
configured module search path = ['/usr/share/ansible/library']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
ansible collection location = /usr/lib/python3.8/site-packages/ansible_collections
executable location = /usr/bin/ansible
python version = 3.8.13 (default, Apr 28 2022, 12:34:52) [GCC 8.3.1 20190311 (Red Hat 8.3.1-3)]
jinja version = 3.1.2
libyaml = True
Using /usr/share/ansible/ansible.cfg as config file
[DEPRECATION WARNING]: [defaults]callback_whitelist option, normalizing names to new standard, use callbacks_enabled instead. This feature will be removed from ansible-core in version 2.15. Deprecation warnings can be disabled by
setting deprecation_warnings=False in ansible.cfg.
setting up inventory plugins
host_list declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
script declined parsing /usr/share/ansible/inventory/consort.yml as it did not pass its verify_file() method
Using inventory plugin 'consort' to process inventory source '/usr/share/ansible/inventory/consort.yml'
inventory/consort _populate took 3272ms
inventory/consort parse took 3902ms
Parsed /usr/share/ansible/inventory/consort.yml inventory source with auto plugin
hosts (1):
localhost
real 0m39.021s
user 0m35.999s
sys 0m1.989s
```
### Expected Results
Less of an increase in inventory parsing time between versions.
### Actual Results
```console
Long startup times processing the inventory data on every command.
Possibly unrelated, but we did see a behaviour change during the post-populate stage where Ansible was making a lot of calls to `host_group_vars`:
2026 1662134089.85213: Loading VarsModule 'host_group_vars' from /usr/lib/python3.8/site-packages/ansible/plugins/vars/host_group_vars.py (found_in_cache=True, class_only=False)
2026 1662134089.86151: Loading ModuleDocFragment 'vars_plugin_staging' from /usr/lib/python3.8/site-packages/ansible/plugins/doc_fragments/vars_plugin_staging.py (found_in_cache=True, class_only=False)
2026 1662134089.86523: Loaded config def from plugin (vars/host_group_vars)
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78795
|
https://github.com/ansible/ansible/pull/78859
|
71adb02142dbbd8f3c083ab2921f0d4b651edf64
|
4115ddd135a0445092c9f9a7b5904942ceedd57c
| 2022-09-16T15:14:09Z |
python
| 2022-09-27T15:34:59Z |
lib/ansible/config/manager.py
|
# Copyright: (c) 2017, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import atexit
import configparser
import os
import os.path
import sys
import stat
import tempfile
import traceback
from collections import namedtuple
from collections.abc import Mapping, Sequence
from jinja2.nativetypes import NativeEnvironment
from ansible.errors import AnsibleOptionsError, AnsibleError
from ansible.module_utils._text import to_text, to_bytes, to_native
from ansible.module_utils.common.yaml import yaml_load
from ansible.module_utils.six import string_types
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.parsing.quoting import unquote
from ansible.parsing.yaml.objects import AnsibleVaultEncryptedUnicode
from ansible.utils import py3compat
from ansible.utils.path import cleanup_tmp_file, makedirs_safe, unfrackpath
Plugin = namedtuple('Plugin', 'name type')
Setting = namedtuple('Setting', 'name value origin type')
INTERNAL_DEFS = {'lookup': ('_terms',)}
def _get_entry(plugin_type, plugin_name, config):
''' construct entry for requested config '''
entry = ''
if plugin_type:
entry += 'plugin_type: %s ' % plugin_type
if plugin_name:
entry += 'plugin: %s ' % plugin_name
entry += 'setting: %s ' % config
return entry
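# Illustrative usage (added example, not in the original source):
#   _get_entry('connection', 'ssh', 'timeout')
#   -> 'plugin_type: connection plugin: ssh setting: timeout '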
# FIXME: see if we can unify in module_utils with similar function used by argspec
def ensure_type(value, value_type, origin=None):
''' return a configuration variable with casting
:arg value: The value to ensure correct typing of
:kwarg value_type: The type of the value. This can be any of the following strings:
:boolean: sets the value to a True or False value
:bool: Same as 'boolean'
:integer: Sets the value to an integer or raises a ValueError
:int: Same as 'integer'
:float: Sets the value to a float or raises a ValueError
:list: Treats the value as a comma-separated list. Splits the value
and returns it as a Python list.
:none: Sets the value to None
:path: Expands any environment variables and tildes in the value.
:tmppath: Create a unique temporary directory inside of the directory
specified by value and return its path.
:temppath: Same as 'tmppath'
:tmp: Same as 'tmppath'
:pathlist: Treat the value as a typical PATH string. (On POSIX, this
means colon separated strings.) Split the value and then expand
each part for environment variables and tildes.
:pathspec: Treat the value as a PATH string. Expands any environment variables
and tildes in the value.
:str: Sets the value to string types.
:string: Same as 'str'
'''
errmsg = ''
basedir = None
if origin and os.path.isabs(origin) and os.path.exists(to_bytes(origin)):
basedir = origin
if value_type:
value_type = value_type.lower()
if value is not None:
if value_type in ('boolean', 'bool'):
value = boolean(value, strict=False)
elif value_type in ('integer', 'int'):
value = int(value)
elif value_type == 'float':
value = float(value)
elif value_type == 'list':
if isinstance(value, string_types):
value = [unquote(x.strip()) for x in value.split(',')]
elif not isinstance(value, Sequence):
errmsg = 'list'
elif value_type == 'none':
if value == "None":
value = None
if value is not None:
errmsg = 'None'
elif value_type == 'path':
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
else:
errmsg = 'path'
elif value_type in ('tmp', 'temppath', 'tmppath'):
if isinstance(value, string_types):
value = resolve_path(value, basedir=basedir)
if not os.path.exists(value):
makedirs_safe(value, 0o700)
prefix = 'ansible-local-%s' % os.getpid()
value = tempfile.mkdtemp(prefix=prefix, dir=value)
atexit.register(cleanup_tmp_file, value, warn=True)
else:
errmsg = 'temppath'
elif value_type == 'pathspec':
if isinstance(value, string_types):
value = value.split(os.pathsep)
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathspec'
elif value_type == 'pathlist':
if isinstance(value, string_types):
value = [x.strip() for x in value.split(',')]
if isinstance(value, Sequence):
value = [resolve_path(x, basedir=basedir) for x in value]
else:
errmsg = 'pathlist'
elif value_type in ('dict', 'dictionary'):
if not isinstance(value, Mapping):
errmsg = 'dictionary'
elif value_type in ('str', 'string'):
if isinstance(value, (string_types, AnsibleVaultEncryptedUnicode, bool, int, float, complex)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
else:
errmsg = 'string'
# defaults to string type
elif isinstance(value, (string_types, AnsibleVaultEncryptedUnicode)):
value = unquote(to_text(value, errors='surrogate_or_strict'))
if errmsg:
raise ValueError('Invalid type provided for "%s": %s' % (errmsg, to_native(value)))
return to_text(value, errors='surrogate_or_strict', nonstring='passthru')
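# Illustrative usage (added examples, not in the original source; paths assume a POSIX host):
#   ensure_type('yes', 'bool')   -> True
#   ensure_type('a, b', 'list')  -> ['a', 'b']
#   ensure_type('~/cfg', 'path') -> '/home/<user>/cfg'
#   ensure_type('x', 'int')      -> raises ValueError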
# FIXME: see if this can live in utils/path
def resolve_path(path, basedir=None):
''' resolve relative or 'variable' paths '''
if '{{CWD}}' in path: # allow users to force CWD using 'magic' {{CWD}}
path = path.replace('{{CWD}}', os.getcwd())
return unfrackpath(path, follow=False, basedir=basedir)
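# Illustrative usage (added example, not in the original source):
#   resolve_path('{{CWD}}/hosts') -> '<current working directory>/hosts'
#   resolve_path('~/hosts')       -> '/home/<user>/hosts'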
# FIXME: generic file type?
def get_config_type(cfile):
ftype = None
if cfile is not None:
ext = os.path.splitext(cfile)[-1]
if ext in ('.ini', '.cfg'):
ftype = 'ini'
elif ext in ('.yaml', '.yml'):
ftype = 'yaml'
else:
raise AnsibleOptionsError("Unsupported configuration file extension for %s: %s" % (cfile, to_native(ext)))
return ftype
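# Illustrative usage (added examples, not in the original source):
#   get_config_type('ansible.cfg') -> 'ini'
#   get_config_type('ansible.yml') -> 'yaml'
#   get_config_type('ansible.txt') -> raises AnsibleOptionsError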
# FIXME: can move to module_utils for use for ini plugins also?
def get_ini_config_value(p, entry):
''' returns the value of last ini entry found '''
value = None
if p is not None:
try:
value = p.get(entry.get('section', 'defaults'), entry.get('key', ''), raw=True)
except Exception: # FIXME: actually report issues here
pass
return value
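# Illustrative usage (added example, not in the original source), given an ini file containing:
#   [defaults]
#   timeout = 15
# then get_ini_config_value(parser, {'section': 'defaults', 'key': 'timeout'}) -> '15'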
def find_ini_config_file(warnings=None):
''' Load INI Config File order(first found is used): ENV, CWD, HOME, /etc/ansible '''
# FIXME: eventually deprecate ini configs
if warnings is None:
# Note: In this case, warnings does nothing
warnings = set()
# A value that can never be a valid path so that we can tell if ANSIBLE_CONFIG was set later
# We can't use None because we could set path to None.
SENTINEL = object
potential_paths = []
# Environment setting
path_from_env = os.getenv("ANSIBLE_CONFIG", SENTINEL)
if path_from_env is not SENTINEL:
path_from_env = unfrackpath(path_from_env, follow=False)
if os.path.isdir(to_bytes(path_from_env)):
path_from_env = os.path.join(path_from_env, "ansible.cfg")
potential_paths.append(path_from_env)
# Current working directory
warn_cmd_public = False
try:
cwd = os.getcwd()
perms = os.stat(cwd)
cwd_cfg = os.path.join(cwd, "ansible.cfg")
if perms.st_mode & stat.S_IWOTH:
# Working directory is world writable so we'll skip it.
# Still have to look for a file here, though, so that we know if we have to warn
if os.path.exists(cwd_cfg):
warn_cmd_public = True
else:
potential_paths.append(to_text(cwd_cfg, errors='surrogate_or_strict'))
except OSError:
# If we can't access cwd, we'll simply skip it as a possible config source
pass
# Per user location
potential_paths.append(unfrackpath("~/.ansible.cfg", follow=False))
# System location
potential_paths.append("/etc/ansible/ansible.cfg")
for path in potential_paths:
b_path = to_bytes(path)
if os.path.exists(b_path) and os.access(b_path, os.R_OK):
break
else:
path = None
# Emit a warning if all the following are true:
# * We did not use a config from ANSIBLE_CONFIG
# * There's an ansible.cfg in the current working directory that we skipped
if path_from_env != path and warn_cmd_public:
warnings.add(u"Ansible is being run in a world writable directory (%s),"
u" ignoring it as an ansible.cfg source."
u" For more information see"
u" https://docs.ansible.com/ansible/devel/reference_appendices/config.html#cfg-in-world-writable-dir"
% to_text(cwd))
return path
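# Illustrative behaviour (added note, not in the original source):
#   ANSIBLE_CONFIG=/etc/myproj (a directory) is treated as /etc/myproj/ansible.cfg,
#   and an ansible.cfg in a world-writable CWD is skipped with a warning rather than used.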
def _add_base_defs_deprecations(base_defs):
'''Add deprecation source 'ansible.builtin' to deprecations in base.yml'''
def process(entry):
if 'deprecated' in entry:
entry['deprecated']['collection_name'] = 'ansible.builtin'
for dummy, data in base_defs.items():
process(data)
for section in ('ini', 'env', 'vars'):
if section in data:
for entry in data[section]:
process(entry)
class ConfigManager(object):
DEPRECATED = [] # type: list[tuple[str, dict[str, str]]]
WARNINGS = set() # type: set[str]
def __init__(self, conf_file=None, defs_file=None):
self._base_defs = {}
self._plugins = {}
self._parsers = {}
self._config_file = conf_file
self._base_defs = self._read_config_yaml_file(defs_file or ('%s/base.yml' % os.path.dirname(__file__)))
_add_base_defs_deprecations(self._base_defs)
if self._config_file is None:
# set config using ini
self._config_file = find_ini_config_file(self.WARNINGS)
# consume configuration
if self._config_file:
# initialize parser and read config
self._parse_config_file()
# ensure we always have config def entry
self._base_defs['CONFIG_FILE'] = {'default': None, 'type': 'path'}
def _read_config_yaml_file(self, yml_file):
# TODO: handle relative paths as relative to the directory containing the current playbook instead of CWD
# Currently this is only used with absolute paths to the `ansible/config` directory
yml_file = to_bytes(yml_file)
if os.path.exists(yml_file):
with open(yml_file, 'rb') as config_def:
return yaml_load(config_def) or {}
raise AnsibleError(
"Missing base YAML definition file (bad install?): %s" % to_native(yml_file))
def _parse_config_file(self, cfile=None):
''' return flat configuration settings from file(s) '''
# TODO: take list of files with merge/nomerge
if cfile is None:
cfile = self._config_file
ftype = get_config_type(cfile)
if cfile is not None:
if ftype == 'ini':
self._parsers[cfile] = configparser.ConfigParser(inline_comment_prefixes=(';',))
with open(to_bytes(cfile), 'rb') as f:
try:
cfg_text = to_text(f.read(), errors='surrogate_or_strict')
except UnicodeError as e:
raise AnsibleOptionsError("Error reading config file(%s) because the config file was not utf8 encoded: %s" % (cfile, to_native(e)))
try:
self._parsers[cfile].read_string(cfg_text)
except configparser.Error as e:
raise AnsibleOptionsError("Error reading config file (%s): %s" % (cfile, to_native(e)))
# FIXME: this should eventually handle yaml config files
# elif ftype == 'yaml':
# with open(cfile, 'rb') as config_stream:
# self._parsers[cfile] = yaml_load(config_stream)
else:
raise AnsibleOptionsError("Unsupported configuration file type: %s" % to_native(ftype))
def _find_yaml_config_files(self):
''' Load YAML Config Files in order, check merge flags, keep origin of settings'''
pass
def get_plugin_options(self, plugin_type, name, keys=None, variables=None, direct=None):
options = {}
defs = self.get_configuration_definitions(plugin_type, name)
for option in defs:
options[option] = self.get_config_value(option, plugin_type=plugin_type, plugin_name=name, keys=keys, variables=variables, direct=direct)
return options
def get_plugin_vars(self, plugin_type, name):
pvars = []
for pdef in self.get_configuration_definitions(plugin_type, name).values():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
pvars.append(var_entry['name'])
return pvars
def get_plugin_options_from_var(self, plugin_type, name, variable):
options = []
for option_name, pdef in self.get_configuration_definitions(plugin_type, name).items():
if 'vars' in pdef and pdef['vars']:
for var_entry in pdef['vars']:
if variable == var_entry['name']:
options.append(option_name)
return options
def get_configuration_definition(self, name, plugin_type=None, plugin_name=None):
ret = {}
if plugin_type is None:
ret = self._base_defs.get(name, None)
elif plugin_name is None:
ret = self._plugins.get(plugin_type, {}).get(name, None)
else:
ret = self._plugins.get(plugin_type, {}).get(plugin_name, {}).get(name, None)
return ret
def get_configuration_definitions(self, plugin_type=None, name=None, ignore_private=False):
''' just list the possible settings, either base or for specific plugins or plugin '''
ret = {}
if plugin_type is None:
ret = self._base_defs
elif name is None:
ret = self._plugins.get(plugin_type, {})
else:
ret = self._plugins.get(plugin_type, {}).get(name, {})
if ignore_private:
for cdef in list(ret.keys()):
if cdef.startswith('_'):
del ret[cdef]
return ret
def _loop_entries(self, container, entry_list):
''' repeat code for value entry assignment '''
value = None
origin = None
for entry in entry_list:
name = entry.get('name')
try:
temp_value = container.get(name, None)
except UnicodeEncodeError:
self.WARNINGS.add(u'value for config entry {0} contains invalid characters, ignoring...'.format(to_text(name)))
continue
if temp_value is not None: # only set if entry is defined in container
# inline vault variables should be converted to a text string
if isinstance(temp_value, AnsibleVaultEncryptedUnicode):
temp_value = to_text(temp_value, errors='surrogate_or_strict')
value = temp_value
origin = name
# deal with deprecation of setting source, if used
if 'deprecated' in entry:
self.DEPRECATED.append((entry['name'], entry['deprecated']))
return value, origin
def get_config_value(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' wrapper '''
try:
value, _drop = self.get_config_value_and_origin(config, cfile=cfile, plugin_type=plugin_type, plugin_name=plugin_name,
keys=keys, variables=variables, direct=direct)
except AnsibleError:
raise
except Exception as e:
raise AnsibleError("Unhandled exception when retrieving %s:\n%s" % (config, to_native(e)), orig_exc=e)
return value
def get_config_value_and_origin(self, config, cfile=None, plugin_type=None, plugin_name=None, keys=None, variables=None, direct=None):
''' Given a config key figure out the actual value and report on the origin of the settings '''
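# Precedence implemented below, highest first (the first source to yield a non-None value wins):
#   direct > vars > playbook keyword > CLI > env > config file (ini) > default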
if cfile is None:
# use default config
cfile = self._config_file
if config == 'CONFIG_FILE':
return cfile, ''
# Note: sources that are lists listed in low to high precedence (last one wins)
value = None
origin = None
defs = self.get_configuration_definitions(plugin_type, plugin_name)
if config in defs:
aliases = defs[config].get('aliases', [])
# direct setting via plugin arguments, can set to None so we bypass rest of processing/defaults
if direct:
if config in direct:
value = direct[config]
origin = 'Direct'
else:
direct_aliases = [direct[alias] for alias in aliases if alias in direct]
if direct_aliases:
value = direct_aliases[0]
origin = 'Direct'
if value is None and variables and defs[config].get('vars'):
# Use 'variable overrides' if present, highest precedence, but only present when querying running play
value, origin = self._loop_entries(variables, defs[config]['vars'])
origin = 'var: %s' % origin
# use playbook keywords if you have em
if value is None and defs[config].get('keyword') and keys:
value, origin = self._loop_entries(keys, defs[config]['keyword'])
origin = 'keyword: %s' % origin
# automap to keywords
# TODO: deprecate these in favor of explicit keyword above
if value is None and keys:
if config in keys:
value = keys[config]
keyword = config
elif aliases:
for alias in aliases:
if alias in keys:
value = keys[alias]
keyword = alias
break
if value is not None:
origin = 'keyword: %s' % keyword
if value is None and 'cli' in defs[config]:
# avoid circular import .. until valid
from ansible import context
value, origin = self._loop_entries(context.CLIARGS, defs[config]['cli'])
origin = 'cli: %s' % origin
# env vars are next precedence
if value is None and defs[config].get('env'):
value, origin = self._loop_entries(py3compat.environ, defs[config]['env'])
origin = 'env: %s' % origin
# try config file entries next, if we have one
if self._parsers.get(cfile, None) is None:
self._parse_config_file(cfile)
if value is None and cfile is not None:
ftype = get_config_type(cfile)
if ftype and defs[config].get(ftype):
if ftype == 'ini':
# load from ini config
try: # FIXME: generalize _loop_entries to allow for files also, most of this code is dupe
for ini_entry in defs[config]['ini']:
temp_value = get_ini_config_value(self._parsers[cfile], ini_entry)
if temp_value is not None:
value = temp_value
origin = cfile
if 'deprecated' in ini_entry:
self.DEPRECATED.append(('[%s]%s' % (ini_entry['section'], ini_entry['key']), ini_entry['deprecated']))
except Exception as e:
sys.stderr.write("Error while loading ini config %s: %s" % (cfile, to_native(e)))
elif ftype == 'yaml':
# FIXME: implement, also , break down key from defs (. notation???)
origin = cfile
# set default if we got here w/o a value
if value is None:
if defs[config].get('required', False):
if not plugin_type or config not in INTERNAL_DEFS.get(plugin_type, {}):
raise AnsibleError("No setting was provided for required configuration %s" %
to_native(_get_entry(plugin_type, plugin_name, config)))
else:
origin = 'default'
value = defs[config].get('default')
if isinstance(value, string_types) and (value.startswith('{{') and value.endswith('}}')) and variables is not None:
# template default values if possible
# NOTE: cannot use is_template due to circular dep
try:
t = NativeEnvironment().from_string(value)
value = t.render(variables)
except Exception:
pass # not templatable
# ensure correct type, can raise exceptions on mismatched types
try:
value = ensure_type(value, defs[config].get('type'), origin=origin)
except ValueError as e:
if origin.startswith('env:') and value == '':
# an empty env var for a non-string type, so fall back to the default
origin = 'default'
value = ensure_type(defs[config].get('default'), defs[config].get('type'), origin=origin)
else:
raise AnsibleOptionsError('Invalid type for configuration option %s (from %s): %s' %
(to_native(_get_entry(plugin_type, plugin_name, config)).strip(), origin, to_native(e)))
# deal with restricted values
if value is not None and 'choices' in defs[config] and defs[config]['choices'] is not None:
invalid_choices = True # assume the worst!
if defs[config].get('type') == 'list':
# for a list type, compare all values in type are allowed
invalid_choices = not all(choice in defs[config]['choices'] for choice in value)
else:
# these should be only the simple data types (string, int, bool, float, etc) .. ignore dicts for now
invalid_choices = value not in defs[config]['choices']
if invalid_choices:
if isinstance(defs[config]['choices'], Mapping):
valid = ', '.join([to_text(k) for k in defs[config]['choices'].keys()])
elif isinstance(defs[config]['choices'], string_types):
valid = defs[config]['choices']
elif isinstance(defs[config]['choices'], Sequence):
valid = ', '.join([to_text(c) for c in defs[config]['choices']])
else:
valid = defs[config]['choices']
raise AnsibleOptionsError('Invalid value "%s" for configuration option "%s", valid values are: %s' %
(value, to_native(_get_entry(plugin_type, plugin_name, config)), valid))
# deal with deprecation of the setting
if 'deprecated' in defs[config] and origin != 'default':
self.DEPRECATED.append((config, defs[config].get('deprecated')))
else:
raise AnsibleError('Requested entry (%s) was not defined in configuration.' % to_native(_get_entry(plugin_type, plugin_name, config)))
return value, origin
def initialize_plugin_configuration_definitions(self, plugin_type, name, defs):
if plugin_type not in self._plugins:
self._plugins[plugin_type] = {}
self._plugins[plugin_type][name] = defs
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,795 |
Dynamic inventory slow on large inventories
|
|
https://github.com/ansible/ansible/issues/78795
|
https://github.com/ansible/ansible/pull/78859
|
71adb02142dbbd8f3c083ab2921f0d4b651edf64
|
4115ddd135a0445092c9f9a7b5904942ceedd57c
| 2022-09-16T15:14:09Z |
python
| 2022-09-27T15:34:59Z |
lib/ansible/plugins/loader.py
|
# (c) 2012, Daniel Hokka Zakrisson <[email protected]>
# (c) 2012-2014, Michael DeHaan <[email protected]> and others
# (c) 2017, Toshio Kuratomi <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import glob
import os
import os.path
import pkgutil
import sys
import warnings
from collections import defaultdict, namedtuple
from traceback import format_exc
from ansible import __version__ as ansible_version
from ansible import constants as C
from ansible.errors import AnsibleError, AnsiblePluginCircularRedirect, AnsiblePluginRemovedError, AnsibleCollectionUnsupportedVersionError
from ansible.module_utils._text import to_bytes, to_text, to_native
from ansible.module_utils.compat.importlib import import_module
from ansible.module_utils.six import string_types
from ansible.parsing.utils.yaml import from_yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins import get_plugin_class, MODULE_CACHE, PATH_CACHE, PLUGIN_PATH_CACHE
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.collection_loader._collection_finder import _AnsibleCollectionFinder, _get_collection_metadata
from ansible.utils.display import Display
from ansible.utils.plugin_docs import add_fragments, find_plugin_docfile
# TODO: take the packaging dep, or vendor SpecifierSet?
try:
from packaging.specifiers import SpecifierSet
from packaging.version import Version
except ImportError:
SpecifierSet = None # type: ignore[misc]
Version = None # type: ignore[misc]
import importlib.util
display = Display()
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
def get_all_plugin_loaders():
return [(name, obj) for (name, obj) in globals().items() if isinstance(obj, PluginLoader)]
def add_all_plugin_dirs(path):
''' add any existing plugin dirs in the path provided '''
b_path = os.path.expanduser(to_bytes(path, errors='surrogate_or_strict'))
if os.path.isdir(b_path):
for name, obj in get_all_plugin_loaders():
if obj.subdir:
plugin_path = os.path.join(b_path, to_bytes(obj.subdir))
if os.path.isdir(plugin_path):
obj.add_directory(to_text(plugin_path))
else:
display.warning("Ignoring invalid path provided to plugin path: '%s' is not a directory" % to_text(path))
def get_shell_plugin(shell_type=None, executable=None):
if not shell_type:
# default to sh
shell_type = 'sh'
# mostly for backwards compat
if executable:
if isinstance(executable, string_types):
shell_filename = os.path.basename(executable)
try:
shell = shell_loader.get(shell_filename)
except Exception:
shell = None
if shell is None:
for shell in shell_loader.all():
if shell_filename in shell.COMPATIBLE_SHELLS:
shell_type = shell.SHELL_FAMILY
break
else:
raise AnsibleError("Either a shell type or a shell executable must be provided ")
shell = shell_loader.get(shell_type)
if not shell:
raise AnsibleError("Could not find the shell plugin required (%s)." % shell_type)
if executable:
setattr(shell, 'executable', executable)
return shell
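# Illustrative usage (added example, not in the original source):
#   get_shell_plugin(executable='/bin/bash') finds no 'bash' plugin by name, matches
#   'bash' via the sh plugin's COMPATIBLE_SHELLS, and pins shell.executable to /bin/bash.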
def add_dirs_to_loader(which_loader, paths):
loader = getattr(sys.modules[__name__], '%s_loader' % which_loader)
for path in paths:
loader.add_directory(path, with_subdir=True)
class PluginPathContext(object):
def __init__(self, path, internal):
self.path = path
self.internal = internal
class PluginLoadContext(object):
def __init__(self):
self.original_name = None
self.redirect_list = []
self.error_list = []
self.import_error_list = []
self.load_attempts = []
self.pending_redirect = None
self.exit_reason = None
self.plugin_resolved_path = None
self.plugin_resolved_name = None
self.plugin_resolved_collection = None # empty string for resolved plugins from user-supplied paths
self.deprecated = False
self.removal_date = None
self.removal_version = None
self.deprecation_warnings = []
self.resolved = False
self._resolved_fqcn = None
self.action_plugin = None
@property
def resolved_fqcn(self):
if not self.resolved:
return
if not self._resolved_fqcn:
final_plugin = self.redirect_list[-1]
if AnsibleCollectionRef.is_valid_fqcr(final_plugin) and final_plugin.startswith('ansible.legacy.'):
final_plugin = final_plugin.split('ansible.legacy.')[-1]
if self.plugin_resolved_collection and not AnsibleCollectionRef.is_valid_fqcr(final_plugin):
final_plugin = self.plugin_resolved_collection + '.' + final_plugin
self._resolved_fqcn = final_plugin
return self._resolved_fqcn
def record_deprecation(self, name, deprecation, collection_name):
if not deprecation:
return self
# The `or ''` instead of using `.get(..., '')` makes sure that even if the user explicitly
# sets `warning_text` to `~` (None) or `false`, we still get an empty string.
warning_text = deprecation.get('warning_text', None) or ''
removal_date = deprecation.get('removal_date', None)
removal_version = deprecation.get('removal_version', None)
# If both removal_date and removal_version are specified, use removal_date
if removal_date is not None:
removal_version = None
warning_text = '{0} has been deprecated.{1}{2}'.format(name, ' ' if warning_text else '', warning_text)
display.deprecated(warning_text, date=removal_date, version=removal_version, collection_name=collection_name)
self.deprecated = True
if removal_date:
self.removal_date = removal_date
if removal_version:
self.removal_version = removal_version
self.deprecation_warnings.append(warning_text)
return self
def resolve(self, resolved_name, resolved_path, resolved_collection, exit_reason, action_plugin):
self.pending_redirect = None
self.plugin_resolved_name = resolved_name
self.plugin_resolved_path = resolved_path
self.plugin_resolved_collection = resolved_collection
self.exit_reason = exit_reason
self.resolved = True
self.action_plugin = action_plugin
return self
def redirect(self, redirect_name):
self.pending_redirect = redirect_name
self.exit_reason = 'pending redirect resolution from {0} to {1}'.format(self.original_name, redirect_name)
self.resolved = False
return self
def nope(self, exit_reason):
self.pending_redirect = None
self.exit_reason = exit_reason
self.resolved = False
return self
class PluginLoader:
'''
PluginLoader loads plugins from the configured plugin directories.
It searches for plugins by iterating through the combined list of play basedirs, configured
paths, and the python path. The first match is used.
'''
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
aliases = {} if aliases is None else aliases
self.class_name = class_name
self.base_class = required_base_class
self.package = package
self.subdir = subdir
# FIXME: remove alias dict in favor of alias by symlink?
self.aliases = aliases
if config and not isinstance(config, list):
config = [config]
elif not config:
config = []
self.config = config
if class_name not in MODULE_CACHE:
MODULE_CACHE[class_name] = {}
if class_name not in PATH_CACHE:
PATH_CACHE[class_name] = None
if class_name not in PLUGIN_PATH_CACHE:
PLUGIN_PATH_CACHE[class_name] = defaultdict(dict)
# hold dirs added at runtime outside of config
self._extra_dirs = []
# caches
self._module_cache = MODULE_CACHE[class_name]
self._paths = PATH_CACHE[class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[class_name]
self._searched_paths = set()
@property
def type(self):
return AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
def __repr__(self):
return 'PluginLoader(type={0})'.format(self.type)
def _clear_caches(self):
if C.OLD_PLUGIN_CACHE_CLEARING:
self._paths = None
else:
# reset global caches
MODULE_CACHE[self.class_name] = {}
PATH_CACHE[self.class_name] = None
PLUGIN_PATH_CACHE[self.class_name] = defaultdict(dict)
# reset internal caches
self._module_cache = MODULE_CACHE[self.class_name]
self._paths = PATH_CACHE[self.class_name]
self._plugin_path_cache = PLUGIN_PATH_CACHE[self.class_name]
self._searched_paths = set()
def __setstate__(self, data):
'''
Deserializer.
'''
class_name = data.get('class_name')
package = data.get('package')
config = data.get('config')
subdir = data.get('subdir')
aliases = data.get('aliases')
base_class = data.get('base_class')
PATH_CACHE[class_name] = data.get('PATH_CACHE')
PLUGIN_PATH_CACHE[class_name] = data.get('PLUGIN_PATH_CACHE')
self.__init__(class_name, package, config, subdir, aliases, base_class)
self._extra_dirs = data.get('_extra_dirs', [])
self._searched_paths = data.get('_searched_paths', set())
def __getstate__(self):
'''
Serializer.
'''
return dict(
class_name=self.class_name,
base_class=self.base_class,
package=self.package,
config=self.config,
subdir=self.subdir,
aliases=self.aliases,
_extra_dirs=self._extra_dirs,
_searched_paths=self._searched_paths,
PATH_CACHE=PATH_CACHE[self.class_name],
PLUGIN_PATH_CACHE=PLUGIN_PATH_CACHE[self.class_name],
)
def format_paths(self, paths):
''' Returns a string suitable for printing of the search path '''
# Uses a list to get the order right
ret = []
for i in paths:
if i not in ret:
ret.append(i)
return os.pathsep.join(ret)
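# Illustrative usage (added example, not in the original source; POSIX os.pathsep):
#   format_paths(['/a', '/b', '/a']) -> '/a:/b'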
def print_paths(self):
return self.format_paths(self._get_paths(subdirs=False))
def _all_directories(self, dir):
results = []
results.append(dir)
for root, subdirs, files in os.walk(dir, followlinks=True):
if '__init__.py' in files:
for x in subdirs:
results.append(os.path.join(root, x))
return results
def _get_package_paths(self, subdirs=True):
''' Gets the path of a Python package '''
if not self.package:
return []
if not hasattr(self, 'package_path'):
m = __import__(self.package)
parts = self.package.split('.')[1:]
for parent_mod in parts:
m = getattr(m, parent_mod)
self.package_path = to_text(os.path.dirname(m.__file__), errors='surrogate_or_strict')
if subdirs:
return self._all_directories(self.package_path)
return [self.package_path]
def _get_paths_with_context(self, subdirs=True):
''' Return a list of PluginPathContext objects to search for plugins in '''
# FIXME: This is potentially buggy if subdirs is sometimes True and sometimes False.
# In current usage, everything calls this with subdirs=True except for module_utils_loader and ansible-doc
# which always calls it with subdirs=False. So there currently isn't a problem with this caching.
if self._paths is not None:
return self._paths
ret = [PluginPathContext(p, False) for p in self._extra_dirs]
# look in any configured plugin paths, allow one level deep for subcategories
if self.config is not None:
for path in self.config:
path = os.path.abspath(os.path.expanduser(path))
if subdirs:
contents = glob.glob("%s/*" % path) + glob.glob("%s/*/*" % path)
for c in contents:
c = to_text(c, errors='surrogate_or_strict')
if os.path.isdir(c) and c not in ret:
ret.append(PluginPathContext(c, False))
path = to_text(path, errors='surrogate_or_strict')
if path not in ret:
ret.append(PluginPathContext(path, False))
# look for any plugins installed in the package subtree
# Note package path always gets added last so that every other type of
# path is searched before it.
ret.extend([PluginPathContext(p, True) for p in self._get_package_paths(subdirs=subdirs)])
# HACK: because powershell modules are in the same directory
# hierarchy as other modules we have to process them last. This is
# because powershell only works on windows but the other modules work
# anywhere (possibly including windows if the correct language
# interpreter is installed). The non-powershell modules can have any
# file extension, so powershell modules would otherwise be picked up by that search.
# The non-hack way to fix this is to have powershell modules be
# a different PluginLoader/ModuleLoader. But that requires changing
# other things too (known things to change would be PATHS_CACHE,
# PLUGIN_PATHS_CACHE, and MODULE_CACHE). Since those three dicts key
# on the class_name, and neither regular modules nor powershell modules
# would have class_names, they would not work as written.
#
# The expected sort order is paths in the order in 'ret' with paths ending in '/windows' at the end,
# also in the original order they were found in 'ret'.
# The .sort() method is guaranteed to be stable, so original order is preserved.
ret.sort(key=lambda p: p.path.endswith('/windows'))
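# e.g. ['/x/modules', '/y/modules/windows', '/z/modules'] sorts to
#      ['/x/modules', '/z/modules', '/y/modules/windows'] (stable sort keeps relative order)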
# cache and return the result
self._paths = ret
return ret
def _get_paths(self, subdirs=True):
''' Return a list of paths to search for plugins in '''
paths_with_context = self._get_paths_with_context(subdirs=subdirs)
return [path_with_context.path for path_with_context in paths_with_context]
def _load_config_defs(self, name, module, path):
''' Reads plugin docs to find configuration setting definitions, to push to config manager for later use '''
# plugins w/o class name don't support config
if self.class_name:
type_name = get_plugin_class(self.class_name)
# if type name != 'module_doc_fragment':
if type_name in C.CONFIGURABLE_PLUGINS and not C.config.get_configuration_definition(type_name, name):
dstring = AnsibleLoader(getattr(module, 'DOCUMENTATION', ''), file_name=path).get_single_data()
# TODO: allow configurable plugins to use sidecar
# if not dstring:
# filename, cn = find_plugin_docfile( name, type_name, self, [os.path.dirname(path)], C.YAML_DOC_EXTENSIONS)
# # TODO: dstring = AnsibleLoader(, file_name=path).get_single_data()
if dstring:
add_fragments(dstring, path, fragment_loader=fragment_loader, is_module=(type_name == 'module'))
if 'options' in dstring and isinstance(dstring['options'], dict):
C.config.initialize_plugin_configuration_definitions(type_name, name, dstring['options'])
display.debug('Loaded config def from plugin (%s/%s)' % (type_name, name))
def add_directory(self, directory, with_subdir=False):
''' Adds an additional directory to the search path '''
directory = os.path.realpath(directory)
if directory is not None:
if with_subdir:
directory = os.path.join(directory, self.subdir)
if directory not in self._extra_dirs:
# append the directory and invalidate the path cache
self._extra_dirs.append(directory)
self._clear_caches()
display.debug('Added %s to loader search path' % (directory))
def _query_collection_routing_meta(self, acr, plugin_type, extension=None):
collection_pkg = import_module(acr.n_python_collection_package_name)
if not collection_pkg:
return None
# FIXME: shouldn't need this...
try:
# force any type-specific metadata postprocessing to occur
import_module(acr.n_python_collection_package_name + '.plugins.{0}'.format(plugin_type))
except ImportError:
pass
# this will be created by the collection PEP302 loader
collection_meta = getattr(collection_pkg, '_collection_meta', None)
if not collection_meta:
return None
# TODO: add subdirs support
# check for extension-specific entry first (eg 'setup.ps1')
# TODO: str/bytes on extension/name munging
if acr.subdirs:
subdir_qualified_resource = '.'.join([acr.subdirs, acr.resource])
else:
subdir_qualified_resource = acr.resource
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource + extension, None)
if not entry:
# try for extension-agnostic entry
entry = collection_meta.get('plugin_routing', {}).get(plugin_type, {}).get(subdir_qualified_resource, None)
return entry
def _find_fq_plugin(self, fq_name, extension, plugin_load_context, ignore_deprecated=False):
"""Search builtin paths to find a plugin. No external paths are searched,
meaning plugins inside roles inside collections will be ignored.
"""
plugin_load_context.resolved = False
plugin_type = AnsibleCollectionRef.legacy_plugin_dir_to_plugin_type(self.subdir)
acr = AnsibleCollectionRef.from_fqcr(fq_name, plugin_type)
# check collection metadata to see if any special handling is required for this plugin
routing_metadata = self._query_collection_routing_meta(acr, plugin_type, extension=extension)
action_plugin = None
# TODO: factor this into a wrapper method
if routing_metadata:
deprecation = routing_metadata.get('deprecation', None)
# this will no-op if there's no deprecation metadata for this plugin
if not ignore_deprecated:
plugin_load_context.record_deprecation(fq_name, deprecation, acr.collection)
tombstone = routing_metadata.get('tombstone', None)
# FIXME: clean up text gen
if tombstone:
removal_date = tombstone.get('removal_date')
removal_version = tombstone.get('removal_version')
warning_text = tombstone.get('warning_text') or ''
warning_text = '{0} has been removed.{1}{2}'.format(fq_name, ' ' if warning_text else '', warning_text)
removed_msg = display.get_deprecation_message(msg=warning_text, version=removal_version,
date=removal_date, removed=True,
collection_name=acr.collection)
plugin_load_context.removal_date = removal_date
plugin_load_context.removal_version = removal_version
plugin_load_context.resolved = True
plugin_load_context.exit_reason = removed_msg
raise AnsiblePluginRemovedError(removed_msg, plugin_load_context=plugin_load_context)
redirect = routing_metadata.get('redirect', None)
if redirect:
# Prevent mystery redirects that would be determined by the collections keyword
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {fq_name}: {redirect}. "
"Redirects must use fully qualified collection names."
)
# FIXME: remove once this is covered in debug or whatever
display.vv("redirecting (type: {0}) {1} to {2}".format(plugin_type, fq_name, redirect))
# The name doing the redirection is added at the beginning of _resolve_plugin_step,
# but if the unqualified name is used in conjunction with the collections keyword, only
# the unqualified name is in the redirect list.
if fq_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(fq_name)
return plugin_load_context.redirect(redirect)
# TODO: non-FQCN case, do we support `.` prefix for current collection, assume it with no dots, require it for subdirs in current, or ?
if self.type == 'modules':
action_plugin = routing_metadata.get('action_plugin')
n_resource = to_native(acr.resource, errors='strict')
# we want this before the extension is added
full_name = '{0}.{1}'.format(acr.n_python_package_name, n_resource)
if extension:
n_resource += extension
pkg = sys.modules.get(acr.n_python_package_name)
if not pkg:
# FIXME: there must be cheaper/safer way to do this
try:
pkg = import_module(acr.n_python_package_name)
except ImportError:
return plugin_load_context.nope('Python package {0} not found'.format(acr.n_python_package_name))
pkg_path = os.path.dirname(pkg.__file__)
n_resource_path = os.path.join(pkg_path, n_resource)
# FIXME: and is file or file link or ...
if os.path.exists(n_resource_path):
return plugin_load_context.resolve(
full_name, to_text(n_resource_path), acr.collection, 'found exact match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
if extension:
# the request was extension-specific, don't try for an extensionless match
return plugin_load_context.nope('no match for {0} in {1}'.format(to_text(n_resource), acr.collection))
# look for any matching extension in the package location (sans filter)
found_files = [f
for f in glob.iglob(os.path.join(pkg_path, n_resource) + '.*')
if os.path.isfile(f) and not f.endswith(C.MODULE_IGNORE_EXTS)]
if not found_files:
return plugin_load_context.nope('failed fuzzy extension match for {0} in {1}'.format(full_name, acr.collection))
found_files = sorted(found_files) # sort to ensure deterministic results, with the shortest match first
if len(found_files) > 1:
display.debug('Found several possible candidates for the plugin but using first: %s' % ','.join(found_files))
return plugin_load_context.resolve(
full_name, to_text(found_files[0]), acr.collection,
'found fuzzy extension match for {0} in {1}'.format(full_name, acr.collection), action_plugin)
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name '''
result = self.find_plugin_with_context(name, mod_type, ignore_deprecated, check_aliases, collection_list)
if result.resolved and result.plugin_resolved_path:
return result.plugin_resolved_path
return None
def find_plugin_with_context(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
''' Find a plugin named name, returning contextual info about the load, recursively resolving redirection '''
plugin_load_context = PluginLoadContext()
plugin_load_context.original_name = name
while True:
result = self._resolve_plugin_step(name, mod_type, ignore_deprecated, check_aliases, collection_list, plugin_load_context=plugin_load_context)
if result.pending_redirect:
if result.pending_redirect in result.redirect_list:
raise AnsiblePluginCircularRedirect('plugin redirect loop resolving {0} (path: {1})'.format(result.original_name, result.redirect_list))
name = result.pending_redirect
result.pending_redirect = None
plugin_load_context = result
else:
break
# TODO: smuggle these to the controller when we're in a worker, reduce noise from normal things like missing plugin packages during collection search
if plugin_load_context.error_list:
display.warning("errors were encountered during the plugin load for {0}:\n{1}".format(name, plugin_load_context.error_list))
# TODO: display/return import_error_list? Only useful for forensics...
# FIXME: store structured deprecation data in PluginLoadContext and use display.deprecate
# if plugin_load_context.deprecated and C.config.get_config_value('DEPRECATION_WARNINGS'):
# for dw in plugin_load_context.deprecation_warnings:
# # TODO: need to smuggle these to the controller if we're in a worker context
# display.warning('[DEPRECATION WARNING] ' + dw)
return plugin_load_context
# FIXME: name bikeshed
def _resolve_plugin_step(self, name, mod_type='', ignore_deprecated=False,
check_aliases=False, collection_list=None, plugin_load_context=PluginLoadContext()):
if not plugin_load_context:
raise ValueError('A PluginLoadContext is required')
plugin_load_context.redirect_list.append(name)
plugin_load_context.resolved = False
if name in _PLUGIN_FILTERS[self.package]:
plugin_load_context.exit_reason = '{0} matched a defined plugin filter'.format(name)
return plugin_load_context
if mod_type:
suffix = mod_type
elif self.class_name:
# Ansible plugins that run in the controller process (most plugins)
suffix = '.py'
else:
# Only Ansible Modules. Ansible modules can be any executable so
# they can have any suffix
suffix = ''
# FIXME: need this right now so we can still load shipped PS module_utils- come up with a more robust solution
if (AnsibleCollectionRef.is_valid_fqcr(name) or collection_list) and not name.startswith('Ansible'):
if '.' in name or not collection_list:
candidates = [name]
else:
candidates = ['{0}.{1}'.format(c, name) for c in collection_list]
for candidate_name in candidates:
try:
plugin_load_context.load_attempts.append(candidate_name)
# HACK: refactor this properly
if candidate_name.startswith('ansible.legacy'):
# 'ansible.legacy' refers to the plugin finding behavior used before collections existed.
# They need to search 'library' and the various '*_plugins' directories in order to find the file.
plugin_load_context = self._find_plugin_legacy(name.removeprefix('ansible.legacy.'),
plugin_load_context, ignore_deprecated, check_aliases, suffix)
else:
# 'ansible.builtin' should be handled here. This means only internal, or builtin, paths are searched.
plugin_load_context = self._find_fq_plugin(candidate_name, suffix, plugin_load_context=plugin_load_context,
ignore_deprecated=ignore_deprecated)
# Pending redirects are added to the redirect_list at the beginning of _resolve_plugin_step.
# Once redirects are resolved, ensure the final FQCN is added here.
# e.g. 'ns.coll.module' is included rather than only 'module' if a collections list is provided:
# - module:
# collections: ['ns.coll']
if plugin_load_context.resolved and candidate_name not in plugin_load_context.redirect_list:
plugin_load_context.redirect_list.append(candidate_name)
if plugin_load_context.resolved or plugin_load_context.pending_redirect: # if we got an answer or need to chase down a redirect, return
return plugin_load_context
except (AnsiblePluginRemovedError, AnsiblePluginCircularRedirect, AnsibleCollectionUnsupportedVersionError):
# these are generally fatal, let them fly
raise
except ImportError as ie:
plugin_load_context.import_error_list.append(ie)
except Exception as ex:
# FIXME: keep actual errors, not just assembled messages
plugin_load_context.error_list.append(to_native(ex))
if plugin_load_context.error_list:
display.debug(msg='plugin lookup for {0} failed; errors: {1}'.format(name, '; '.join(plugin_load_context.error_list)))
plugin_load_context.exit_reason = 'no matches found for {0}'.format(name)
return plugin_load_context
# if we got here, there's no collection list and it's not an FQ name, so do legacy lookup
return self._find_plugin_legacy(name, plugin_load_context, ignore_deprecated, check_aliases, suffix)
def _find_plugin_legacy(self, name, plugin_load_context, ignore_deprecated=False, check_aliases=False, suffix=None):
"""Search library and various *_plugins paths in order to find the file.
This was behavior prior to the existence of collections.
"""
plugin_load_context.resolved = False
if check_aliases:
name = self.aliases.get(name, name)
# The particular cache to look for modules within. This matches the
# requested mod_type
pull_cache = self._plugin_path_cache[suffix]
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = ('ansible.builtin.' + name if path_with_context.internal else name)
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Cache miss. Now let's find the plugin
pass
# TODO: Instead of using the self._paths cache (PATH_CACHE) and
# self._searched_paths we could use an iterator. Before enabling that
# we need to make sure we don't want to add additional directories
# (add_directory()) once we start using the iterator.
# We can use _get_paths_with_context() since add_directory() forces a cache refresh.
for path_with_context in (p for p in self._get_paths_with_context() if p.path not in self._searched_paths and os.path.isdir(to_bytes(p.path))):
path = path_with_context.path
b_path = to_bytes(path)
display.debug('trying %s' % path)
plugin_load_context.load_attempts.append(path)
internal = path_with_context.internal
try:
full_paths = (os.path.join(b_path, f) for f in os.listdir(b_path))
except OSError as e:
display.warning("Error accessing plugin paths: %s" % to_text(e))
for full_path in (to_native(f) for f in full_paths if os.path.isfile(f) and not f.endswith(b'__init__.py')):
full_name = os.path.basename(full_path)
# HACK: We have no way of executing python byte compiled files as ansible modules so specifically exclude them
# FIXME: I believe this is only correct for modules and module_utils.
                # For all other plugins, .pyc and .pyo should be valid
if any(full_path.endswith(x) for x in C.MODULE_IGNORE_EXTS):
continue
splitname = os.path.splitext(full_name)
base_name = splitname[0]
try:
extension = splitname[1]
except IndexError:
extension = ''
# everything downstream expects unicode
full_path = to_text(full_path, errors='surrogate_or_strict')
# Module found, now enter it into the caches that match this file
if base_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache['']:
self._plugin_path_cache[''][full_name] = PluginPathContext(full_path, internal)
if base_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][base_name] = PluginPathContext(full_path, internal)
if full_name not in self._plugin_path_cache[extension]:
self._plugin_path_cache[extension][full_name] = PluginPathContext(full_path, internal)
self._searched_paths.add(path)
try:
path_with_context = pull_cache[name]
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + name if path_with_context.internal else name
plugin_load_context.resolved = True
return plugin_load_context
except KeyError:
# Didn't find the plugin in this directory. Load modules from the next one
pass
# if nothing is found, try finding alias/deprecated
if not name.startswith('_'):
alias_name = '_' + name
# We've already cached all the paths at this point
if alias_name in pull_cache:
path_with_context = pull_cache[alias_name]
if not ignore_deprecated and not os.path.islink(path_with_context.path):
# FIXME: this is not always the case, some are just aliases
display.deprecated('%s is kept for backwards compatibility but usage is discouraged. ' # pylint: disable=ansible-deprecated-no-version
'The module documentation details page may explain more about this rationale.' % name.lstrip('_'))
plugin_load_context.plugin_resolved_path = path_with_context.path
plugin_load_context.plugin_resolved_name = alias_name
plugin_load_context.plugin_resolved_collection = 'ansible.builtin' if path_with_context.internal else ''
plugin_load_context._resolved_fqcn = 'ansible.builtin.' + alias_name if path_with_context.internal else alias_name
plugin_load_context.resolved = True
return plugin_load_context
# last ditch, if it's something that can be redirected, look for a builtin redirect before giving up
candidate_fqcr = 'ansible.builtin.{0}'.format(name)
if '.' not in name and AnsibleCollectionRef.is_valid_fqcr(candidate_fqcr):
return self._find_fq_plugin(fq_name=candidate_fqcr, extension=suffix, plugin_load_context=plugin_load_context, ignore_deprecated=ignore_deprecated)
return plugin_load_context.nope('{0} is not eligible for last-chance resolution'.format(name))
def has_plugin(self, name, collection_list=None):
''' Checks if a plugin named name exists '''
try:
return self.find_plugin(name, collection_list=collection_list) is not None
except Exception as ex:
if isinstance(ex, AnsibleError):
raise
# log and continue, likely an innocuous type/package loading failure in collections import
display.debug('has_plugin error: {0}'.format(to_text(ex)))
__contains__ = has_plugin
def _load_module_source(self, name, path):
# avoid collisions across plugins
if name.startswith('ansible_collections.'):
full_name = name
else:
full_name = '.'.join([self.package, name])
if full_name in sys.modules:
# Avoids double loading, See https://github.com/ansible/ansible/issues/13110
return sys.modules[full_name]
with warnings.catch_warnings():
# FIXME: this still has issues if the module was previously imported but not "cached",
# we should bypass this entire codepath for things that are directly importable
warnings.simplefilter("ignore", RuntimeWarning)
spec = importlib.util.spec_from_file_location(to_native(full_name), to_native(path))
module = importlib.util.module_from_spec(spec)
# mimic import machinery; make the module-being-loaded available in sys.modules during import
# and remove if there's a failure...
sys.modules[full_name] = module
try:
spec.loader.exec_module(module)
except Exception:
del sys.modules[full_name]
raise
return module
def _update_object(self, obj, name, path, redirected_names=None, resolved=None):
# set extra info on the module, in case we want it later
setattr(obj, '_original_path', path)
setattr(obj, '_load_name', name)
setattr(obj, '_redirected_names', redirected_names or [])
names = []
if resolved:
names.append(resolved)
if redirected_names:
# reverse list so best name comes first
names.extend(redirected_names[::-1])
if not names:
raise AnsibleError(f"Missing FQCN for plugin source {name}")
setattr(obj, 'ansible_aliases', names)
setattr(obj, 'ansible_name', names[0])
def get(self, name, *args, **kwargs):
return self.get_with_context(name, *args, **kwargs).object
def get_with_context(self, name, *args, **kwargs):
''' instantiates a plugin of the given name using arguments '''
found_in_cache = True
class_only = kwargs.pop('class_only', False)
collection_list = kwargs.pop('collection_list', None)
if name in self.aliases:
name = self.aliases[name]
plugin_load_context = self.find_plugin_with_context(name, collection_list=collection_list)
if not plugin_load_context.resolved or not plugin_load_context.plugin_resolved_path:
# FIXME: this is probably an error (eg removed plugin)
return get_with_context_result(None, plugin_load_context)
fq_name = plugin_load_context.resolved_fqcn
if '.' not in fq_name:
fq_name = '.'.join((plugin_load_context.plugin_resolved_collection, fq_name))
name = plugin_load_context.plugin_resolved_name
path = plugin_load_context.plugin_resolved_path
redirected_names = plugin_load_context.redirect_list or []
if path not in self._module_cache:
self._module_cache[path] = self._load_module_source(name, path)
found_in_cache = False
self._load_config_defs(name, self._module_cache[path], path)
obj = getattr(self._module_cache[path], self.class_name)
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
return get_with_context_result(None, plugin_load_context)
if not issubclass(obj, plugin_class):
return get_with_context_result(None, plugin_load_context)
# FIXME: update this to use the load context
self._display_plugin_load(self.class_name, name, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
# A plugin may need to use its _load_name in __init__ (for example, to set
# or get options from config), so update the object before using the constructor
instance = object.__new__(obj)
self._update_object(instance, name, path, redirected_names, fq_name)
obj.__init__(instance, *args, **kwargs) # pylint: disable=unnecessary-dunder-call
obj = instance
except TypeError as e:
if "abstract" in e.args[0]:
# Abstract Base Class or incomplete plugin, don't load
display.v('Returning not found on "%s" as it has unimplemented abstract methods; %s' % (name, to_native(e)))
return get_with_context_result(None, plugin_load_context)
raise
self._update_object(obj, name, path, redirected_names, fq_name)
return get_with_context_result(obj, plugin_load_context)
def _display_plugin_load(self, class_name, name, searched_paths, path, found_in_cache=None, class_only=None):
''' formats data to display debug info for plugin loading, also avoids processing unless really needed '''
if C.DEFAULT_DEBUG:
msg = 'Loading %s \'%s\' from %s' % (class_name, os.path.basename(name), path)
if len(searched_paths) > 1:
msg = '%s (searched paths: %s)' % (msg, self.format_paths(searched_paths))
if found_in_cache or class_only:
msg = '%s (found_in_cache=%s, class_only=%s)' % (msg, found_in_cache, class_only)
display.debug(msg)
def all(self, *args, **kwargs):
'''
Iterate through all plugins of this type, in configured paths (no collections)
A plugin loader is initialized with a specific type. This function is an iterator returning
all of the plugins of that type to the caller.
:kwarg path_only: If this is set to True, then we return the paths to where the plugins reside
instead of an instance of the plugin. This conflicts with class_only and both should
not be set.
:kwarg class_only: If this is set to True then we return the python class which implements
a plugin rather than an instance of the plugin. This conflicts with path_only and both
should not be set.
:kwarg _dedupe: By default, we only return one plugin per plugin name. Deduplication happens
in the same way as the :meth:`get` and :meth:`find_plugin` methods resolve which plugin
should take precedence. If this is set to False, then we return all of the plugins
found, including those with duplicate names. In the case of duplicates, the order in
which they are returned is the one that would take precedence first, followed by the
others in decreasing precedence order. This should only be used by subclasses which
want to manage their own deduplication of the plugins.
:*args: Any extra arguments are passed to each plugin when it is instantiated.
:**kwargs: Any extra keyword arguments are passed to each plugin when it is instantiated.
'''
# TODO: Change the signature of this method to:
# def all(return_type='instance', args=None, kwargs=None):
# if args is None: args = []
# if kwargs is None: kwargs = {}
# return_type can be instance, class, or path.
# These changes will mean that plugin parameters won't conflict with our params and
# will also make it impossible to request both a path and a class at the same time.
#
# Move _dedupe to be a class attribute, CUSTOM_DEDUPE, with subclasses for filters and
# tests setting it to True
dedupe = kwargs.pop('_dedupe', True)
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False)
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
all_matches = []
found_in_cache = True
legacy_excluding_builtin = set()
for path_with_context in self._get_paths_with_context():
matches = glob.glob(to_native(os.path.join(path_with_context.path, "*.py")))
if not path_with_context.internal:
legacy_excluding_builtin.update(matches)
# we sort within each path, but keep path precedence from config
all_matches.extend(sorted(matches, key=os.path.basename))
loaded_modules = set()
for path in all_matches:
name = os.path.splitext(path)[0]
basename = os.path.basename(name)
if basename in _PLUGIN_FILTERS[self.package]:
display.debug("'%s' skipped due to a defined plugin filter" % basename)
continue
if basename == '__init__' or (basename == 'base' and self.package == 'ansible.plugins.cache'):
# cache has legacy 'base.py' file, which is wrapper for __init__.py
display.debug("'%s' skipped due to reserved name" % basename)
continue
if dedupe and basename in loaded_modules:
display.debug("'%s' skipped as duplicate" % basename)
continue
loaded_modules.add(basename)
if path_only:
yield path
continue
if path not in self._module_cache:
if self.type in ('filter', 'test'):
# filter and test plugin files can contain multiple plugins
# they must have a unique python module name to prevent them from shadowing each other
full_name = '{0}_{1}'.format(abs(hash(path)), basename)
else:
full_name = basename
try:
module = self._load_module_source(full_name, path)
except Exception as e:
display.warning("Skipping plugin (%s), cannot load: %s" % (path, to_text(e)))
continue
self._module_cache[path] = module
found_in_cache = False
else:
module = self._module_cache[path]
self._load_config_defs(basename, module, path)
try:
obj = getattr(module, self.class_name)
except AttributeError as e:
display.warning("Skipping plugin (%s) as it seems to be invalid: %s" % (path, to_text(e)))
continue
if self.base_class:
# The import path is hardcoded and should be the right place,
# so we are not expecting an ImportError.
module = __import__(self.package, fromlist=[self.base_class])
# Check whether this obj has the required base class.
try:
plugin_class = getattr(module, self.base_class)
except AttributeError:
continue
if not issubclass(obj, plugin_class):
continue
self._display_plugin_load(self.class_name, basename, self._searched_paths, path, found_in_cache=found_in_cache, class_only=class_only)
if not class_only:
try:
obj = obj(*args, **kwargs)
except TypeError as e:
display.warning("Skipping plugin (%s) as it seems to be incomplete: %s" % (path, to_text(e)))
if path in legacy_excluding_builtin:
fqcn = basename
else:
fqcn = f"ansible.builtin.{basename}"
self._update_object(obj, basename, path, resolved=fqcn)
yield obj
class Jinja2Loader(PluginLoader):
"""
PluginLoader optimized for Jinja2 plugins
The filter and test plugins are Jinja2 plugins encapsulated inside of our plugin format.
We need to do a few things differently in the base class because of file == plugin
assumptions and dedupe logic.
"""
def __init__(self, class_name, package, config, subdir, aliases=None, required_base_class=None):
super(Jinja2Loader, self).__init__(class_name, package, config, subdir, aliases=aliases, required_base_class=required_base_class)
self._loaded_j2_file_maps = []
def _clear_caches(self):
super(Jinja2Loader, self)._clear_caches()
self._loaded_j2_file_maps = []
def find_plugin(self, name, mod_type='', ignore_deprecated=False, check_aliases=False, collection_list=None):
# TODO: handle collection plugin find, see 'get_with_context'
# this can really 'find plugin file'
plugin = super(Jinja2Loader, self).find_plugin(name, mod_type=mod_type, ignore_deprecated=ignore_deprecated, check_aliases=check_aliases,
collection_list=collection_list)
# if not found, try loading all non collection plugins and see if this in there
if not plugin:
all_plugins = self.all()
plugin = all_plugins.get(name, None)
return plugin
@property
def method_map_name(self):
return get_plugin_class(self.class_name) + 's'
def get_contained_plugins(self, collection, plugin_path, name):
plugins = []
full_name = '.'.join(['ansible_collections', collection, 'plugins', self.type, name])
try:
# use 'parent' loader class to find files, but cannot return this as it can contain multiple plugins per file
if plugin_path not in self._module_cache:
self._module_cache[plugin_path] = self._load_module_source(full_name, plugin_path)
module = self._module_cache[plugin_path]
obj = getattr(module, self.class_name)
except Exception as e:
raise KeyError('Failed to load %s for %s: %s' % (plugin_path, collection, to_native(e)))
plugin_impl = obj()
if plugin_impl is None:
raise KeyError('Could not find %s.%s' % (collection, name))
try:
method_map = getattr(plugin_impl, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Ignoring %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_path), e))
return plugins
for func_name, func in plugin_map:
fq_name = '.'.join((collection, func_name))
full = '.'.join((full_name, func_name))
pclass = self._load_jinja2_class()
plugin = pclass(func)
if plugin in plugins:
continue
self._update_object(plugin, full, plugin_path, resolved=fq_name)
plugins.append(plugin)
return plugins
def get_with_context(self, name, *args, **kwargs):
# found_in_cache = True
        class_only = kwargs.pop('class_only', False)  # just pop it, don't want to pass through
collection_list = kwargs.pop('collection_list', None)
context = PluginLoadContext()
# avoid collection path for legacy
name = name.removeprefix('ansible.legacy.')
if '.' not in name:
# Filter/tests must always be FQCN except builtin and legacy
for known_plugin in self.all(*args, **kwargs):
if known_plugin.matches_name([name]):
context.resolved = True
context.plugin_resolved_name = name
context.plugin_resolved_path = known_plugin._original_path
context.plugin_resolved_collection = 'ansible.builtin' if known_plugin.ansible_name.startswith('ansible.builtin.') else ''
context._resolved_fqcn = known_plugin.ansible_name
return get_with_context_result(known_plugin, context)
plugin = None
key, leaf_key = get_fqcr_and_name(name)
seen = set()
# follow the meta!
while True:
if key in seen:
raise AnsibleError('recursive collection redirect found for %r' % name, 0)
seen.add(key)
acr = AnsibleCollectionRef.try_parse_fqcr(key, self.type)
if not acr:
raise KeyError('invalid plugin name: {0}'.format(key))
try:
ts = _get_collection_metadata(acr.collection)
except ValueError as e:
# no collection
raise KeyError('Invalid plugin FQCN ({0}): {1}'.format(key, to_native(e)))
# TODO: implement cycle detection (unified across collection redir as well)
routing_entry = ts.get('plugin_routing', {}).get(self.type, {}).get(leaf_key, {})
# check deprecations
deprecation_entry = routing_entry.get('deprecation')
if deprecation_entry:
warning_text = deprecation_entry.get('warning_text')
removal_date = deprecation_entry.get('removal_date')
removal_version = deprecation_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" is deprecated'.format(self.type, key)
display.deprecated(warning_text, version=removal_version, date=removal_date, collection_name=acr.collection)
# check removal
tombstone_entry = routing_entry.get('tombstone')
if tombstone_entry:
warning_text = tombstone_entry.get('warning_text')
removal_date = tombstone_entry.get('removal_date')
removal_version = tombstone_entry.get('removal_version')
if not warning_text:
warning_text = '{0} "{1}" has been removed'.format(self.type, key)
exc_msg = display.get_deprecation_message(warning_text, version=removal_version, date=removal_date,
collection_name=acr.collection, removed=True)
raise AnsiblePluginRemovedError(exc_msg)
# check redirects
redirect = routing_entry.get('redirect', None)
if redirect:
if not AnsibleCollectionRef.is_valid_fqcr(redirect):
raise AnsibleError(
f"Collection {acr.collection} contains invalid redirect for {acr.collection}.{acr.resource}: {redirect}. "
"Redirects must use fully qualified collection names."
)
next_key, leaf_key = get_fqcr_and_name(redirect, collection=acr.collection)
display.vvv('redirecting (type: {0}) {1}.{2} to {3}'.format(self.type, acr.collection, acr.resource, next_key))
key = next_key
else:
break
try:
pkg = import_module(acr.n_python_package_name)
except ImportError as e:
raise KeyError(to_native(e))
parent_prefix = acr.collection
if acr.subdirs:
parent_prefix = '{0}.{1}'.format(parent_prefix, acr.subdirs)
try:
for dummy, module_name, ispkg in pkgutil.iter_modules(pkg.__path__, prefix=parent_prefix + '.'):
if ispkg:
continue
try:
# use 'parent' loader class to find files, but cannot return this as it can contain
# multiple plugins per file
plugin_impl = super(Jinja2Loader, self).get_with_context(module_name, *args, **kwargs)
except Exception as e:
raise KeyError(to_native(e))
try:
method_map = getattr(plugin_impl.object, self.method_map_name)
plugin_map = method_map().items()
except Exception as e:
display.warning("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(plugin_impl.object._original_path), e))
continue
for func_name, func in plugin_map:
fq_name = '.'.join((parent_prefix, func_name))
src_name = f"ansible_collections.{acr.collection}.plugins.{self.type}.{acr.subdirs}.{func_name}"
# TODO: load anyways into CACHE so we only match each at end of loop
                        #   the files themselves should already be cached by the base class caching of modules (python)
if key in (func_name, fq_name):
pclass = self._load_jinja2_class()
plugin = pclass(func)
if plugin:
context = plugin_impl.plugin_load_context
self._update_object(plugin, src_name, plugin_impl.object._original_path, resolved=fq_name)
                            break  # go to next file as it can override if dupe (don't break both loops)
except AnsiblePluginRemovedError as apre:
raise AnsibleError(to_native(apre), 0, orig_exc=apre)
except (AnsibleError, KeyError):
raise
except Exception as ex:
display.warning('An unexpected error occurred during Jinja2 plugin loading: {0}'.format(to_native(ex)))
display.vvv('Unexpected error during Jinja2 plugin loading: {0}'.format(format_exc()))
raise AnsibleError(to_native(ex), 0, orig_exc=ex)
return get_with_context_result(plugin, context)
def all(self, *args, **kwargs):
# inputs, we ignore 'dedupe' we always do, used in base class to find files for this one
path_only = kwargs.pop('path_only', False)
class_only = kwargs.pop('class_only', False) # basically ignored for test/filters since they are functions
# Having both path_only and class_only is a coding bug
if path_only and class_only:
raise AnsibleError('Do not set both path_only and class_only when calling PluginLoader.all()')
found = set()
        # get plugins from files in configured paths (multiple in each)
for p_map in self._j2_all_file_maps(*args, **kwargs):
            # p_map is really an object from a file whose class holds multiple plugins
plugins_list = getattr(p_map, self.method_map_name)
try:
plugins = plugins_list()
except Exception as e:
display.vvvv("Skipping %s plugins in '%s' as it seems to be invalid: %r" % (self.type, to_text(p_map._original_path), e))
continue
for plugin_name in plugins.keys():
if plugin_name in _PLUGIN_FILTERS[self.package]:
display.debug("%s skipped due to a defined plugin filter" % plugin_name)
continue
if plugin_name in found:
display.debug("%s skipped as duplicate" % plugin_name)
continue
if path_only:
result = p_map._original_path
else:
                    # loader class is for the file with multiple plugins, but each plugin now has its own class
pclass = self._load_jinja2_class()
result = pclass(plugins[plugin_name]) # if bad plugin, let exception rise
found.add(plugin_name)
fqcn = plugin_name
collection = '.'.join(p_map.ansible_name.split('.')[:2]) if p_map.ansible_name.count('.') >= 2 else ''
if not plugin_name.startswith(collection):
fqcn = f"{collection}.{plugin_name}"
self._update_object(result, plugin_name, p_map._original_path, resolved=fqcn)
yield result
def _load_jinja2_class(self):
""" override the normal method of plugin classname as these are used in the generic funciton
to access the 'multimap' of filter/tests to function, this is a 'singular' plugin for
each entry.
"""
class_name = 'AnsibleJinja2%s' % get_plugin_class(self.class_name).capitalize()
module = __import__(self.package, fromlist=[class_name])
return getattr(module, class_name)
def _j2_all_file_maps(self, *args, **kwargs):
"""
* Unlike other plugin types, file != plugin, a file can contain multiple plugins (of same type).
This is why we do not deduplicate ansible file names at this point, we mostly care about
the names of the actual jinja2 plugins which are inside of our files.
* This method will NOT fetch collection plugin files, only those that would be expected under 'ansible.builtin/legacy'.
"""
# populate cache if needed
if not self._loaded_j2_file_maps:
# We don't deduplicate ansible file names.
# Instead, calling code deduplicates jinja2 plugin names when loading each file.
kwargs['_dedupe'] = False
# To match correct precedence, call base class' all() to get a list of files,
self._loaded_j2_file_maps = list(super(Jinja2Loader, self).all(*args, **kwargs))
return self._loaded_j2_file_maps
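# helper used by Jinja2Loader: split a plugin reference into (fqcr, leaf name), qualifying bare names with the given collection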
def get_fqcr_and_name(resource, collection='ansible.builtin'):
if '.' not in resource:
name = resource
fqcr = collection + '.' + resource
else:
name = resource.split('.')[-1]
fqcr = resource
return fqcr, name
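# read the optional plugin filter file and return a map of package name -> frozenset of rejected plugin names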
def _load_plugin_filter():
filters = defaultdict(frozenset)
user_set = False
if C.PLUGIN_FILTERS_CFG is None:
filter_cfg = '/etc/ansible/plugin_filters.yml'
else:
filter_cfg = C.PLUGIN_FILTERS_CFG
user_set = True
if os.path.exists(filter_cfg):
with open(filter_cfg, 'rb') as f:
try:
filter_data = from_yaml(f.read())
except Exception as e:
display.warning(u'The plugin filter file, {0} was not parsable.'
u' Skipping: {1}'.format(filter_cfg, to_text(e)))
return filters
try:
version = filter_data['filter_version']
except KeyError:
display.warning(u'The plugin filter file, {0} was invalid.'
u' Skipping.'.format(filter_cfg))
return filters
# Try to convert for people specifying version as a float instead of string
version = to_text(version)
version = version.strip()
if version == u'1.0':
# Modules and action plugins share the same blacklist since the difference between the
# two isn't visible to the users
try:
filters['ansible.modules'] = frozenset(filter_data['module_blacklist'])
except TypeError:
display.warning(u'Unable to parse the plugin filter file {0} as'
u' module_blacklist is not a list.'
u' Skipping.'.format(filter_cfg))
return filters
filters['ansible.plugins.action'] = filters['ansible.modules']
else:
display.warning(u'The plugin filter file, {0} was a version not recognized by this'
u' version of Ansible. Skipping.'.format(filter_cfg))
else:
if user_set:
display.warning(u'The plugin filter file, {0} does not exist.'
u' Skipping.'.format(filter_cfg))
    # Special-case the stat module as Ansible can run very few things if stat is blacklisted.
if 'stat' in filters['ansible.modules']:
raise AnsibleError('The stat module was specified in the module blacklist file, {0}, but'
' Ansible will not function without the stat module. Please remove stat'
' from the blacklist.'.format(to_native(filter_cfg)))
return filters
# since we don't want the actual collection loader understanding metadata, we'll do it in an event handler
def _on_collection_load_handler(collection_name, collection_path):
display.vvvv(to_text('Loading collection {0} from {1}'.format(collection_name, collection_path)))
collection_meta = _get_collection_metadata(collection_name)
try:
if not _does_collection_support_ansible_version(collection_meta.get('requires_ansible', ''), ansible_version):
mismatch_behavior = C.config.get_config_value('COLLECTIONS_ON_ANSIBLE_VERSION_MISMATCH')
message = 'Collection {0} does not support Ansible version {1}'.format(collection_name, ansible_version)
if mismatch_behavior == 'warning':
display.warning(message)
elif mismatch_behavior == 'error':
raise AnsibleCollectionUnsupportedVersionError(message)
except AnsibleError:
raise
except Exception as ex:
display.warning('Error parsing collection metadata requires_ansible value from collection {0}: {1}'.format(collection_name, ex))
def _does_collection_support_ansible_version(requirement_string, ansible_version):
if not requirement_string:
return True
if not SpecifierSet:
display.warning('packaging Python module unavailable; unable to validate collection Ansible version requirements')
return True
ss = SpecifierSet(requirement_string)
# ignore prerelease/postrelease/beta/dev flags for simplicity
base_ansible_version = Version(ansible_version).base_version
return ss.contains(base_ansible_version)
def _configure_collection_loader():
if AnsibleCollectionConfig.collection_finder:
# this must be a Python warning so that it can be filtered out by the import sanity test
warnings.warn('AnsibleCollectionFinder has already been configured')
return
finder = _AnsibleCollectionFinder(C.COLLECTIONS_PATHS, C.COLLECTIONS_SCAN_SYS_PATH)
finder._install()
# this should succeed now
AnsibleCollectionConfig.on_collection_load += _on_collection_load_handler
# TODO: All of the following is initialization code. It should be moved inside of an initialization
# function which is called at some point early in the ansible and ansible-playbook CLI startup.
_PLUGIN_FILTERS = _load_plugin_filter()
_configure_collection_loader()
# doc fragments first
fragment_loader = PluginLoader(
'ModuleDocFragment',
'ansible.plugins.doc_fragments',
C.DOC_FRAGMENT_PLUGIN_PATH,
'doc_fragments',
)
action_loader = PluginLoader(
'ActionModule',
'ansible.plugins.action',
C.DEFAULT_ACTION_PLUGIN_PATH,
'action_plugins',
required_base_class='ActionBase',
)
cache_loader = PluginLoader(
'CacheModule',
'ansible.plugins.cache',
C.DEFAULT_CACHE_PLUGIN_PATH,
'cache_plugins',
)
callback_loader = PluginLoader(
'CallbackModule',
'ansible.plugins.callback',
C.DEFAULT_CALLBACK_PLUGIN_PATH,
'callback_plugins',
)
connection_loader = PluginLoader(
'Connection',
'ansible.plugins.connection',
C.DEFAULT_CONNECTION_PLUGIN_PATH,
'connection_plugins',
aliases={'paramiko': 'paramiko_ssh'},
required_base_class='ConnectionBase',
)
shell_loader = PluginLoader(
'ShellModule',
'ansible.plugins.shell',
'shell_plugins',
'shell_plugins',
)
module_loader = PluginLoader(
'',
'ansible.modules',
C.DEFAULT_MODULE_PATH,
'library',
)
module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
# NB: dedicated loader is currently necessary because PS module_utils expects "with subdir" lookup where
# regular module_utils doesn't. This can be revisited once we have more granular loaders.
ps_module_utils_loader = PluginLoader(
'',
'ansible.module_utils',
C.DEFAULT_MODULE_UTILS_PATH,
'module_utils',
)
lookup_loader = PluginLoader(
'LookupModule',
'ansible.plugins.lookup',
C.DEFAULT_LOOKUP_PLUGIN_PATH,
'lookup_plugins',
required_base_class='LookupBase',
)
filter_loader = Jinja2Loader(
'FilterModule',
'ansible.plugins.filter',
C.DEFAULT_FILTER_PLUGIN_PATH,
'filter_plugins',
)
test_loader = Jinja2Loader(
'TestModule',
'ansible.plugins.test',
C.DEFAULT_TEST_PLUGIN_PATH,
'test_plugins'
)
strategy_loader = PluginLoader(
'StrategyModule',
'ansible.plugins.strategy',
C.DEFAULT_STRATEGY_PLUGIN_PATH,
'strategy_plugins',
required_base_class='StrategyBase',
)
terminal_loader = PluginLoader(
'TerminalModule',
'ansible.plugins.terminal',
C.DEFAULT_TERMINAL_PLUGIN_PATH,
'terminal_plugins',
required_base_class='TerminalBase'
)
vars_loader = PluginLoader(
'VarsModule',
'ansible.plugins.vars',
C.DEFAULT_VARS_PLUGIN_PATH,
'vars_plugins',
)
cliconf_loader = PluginLoader(
'Cliconf',
'ansible.plugins.cliconf',
C.DEFAULT_CLICONF_PLUGIN_PATH,
'cliconf_plugins',
required_base_class='CliconfBase'
)
netconf_loader = PluginLoader(
'Netconf',
'ansible.plugins.netconf',
C.DEFAULT_NETCONF_PLUGIN_PATH,
'netconf_plugins',
required_base_class='NetconfBase'
)
inventory_loader = PluginLoader(
'InventoryModule',
'ansible.plugins.inventory',
C.DEFAULT_INVENTORY_PLUGIN_PATH,
'inventory_plugins'
)
httpapi_loader = PluginLoader(
'HttpApi',
'ansible.plugins.httpapi',
C.DEFAULT_HTTPAPI_PLUGIN_PATH,
'httpapi_plugins',
required_base_class='HttpApiBase',
)
become_loader = PluginLoader(
'BecomeModule',
'ansible.plugins.become',
C.BECOME_PLUGIN_PATH,
'become_plugins'
)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,675 |
More os.path filters
|
### Summary
Please add more filters based on `os.path`, specifically `os.path.commonpath` and `os.path.normpath`. They are cheap to implement but would be handy for tasks such as validation, archive management, and programmatic path generation, and could replace a lot of loops and regex filters.
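For illustration, usage could look like this (hypothetical filter names mirroring the `os.path` functions; not part of ansible-core at the time of writing):
```jinja
{{ ['/etc/app/conf.d/a.conf', '/etc/app/conf.d/b.conf'] | commonpath }}
{# => /etc/app/conf.d #}
{{ '/etc/app/../app/conf.d/a.conf' | normpath }}
{# => /etc/app/conf.d/a.conf #}
```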
### Issue Type
Feature Idea
### Component Name
core
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78675
|
https://github.com/ansible/ansible/pull/78894
|
7c4d5f509930d832c6cbd5d5660c26e9d73fab58
|
6e949d8f5d6dcf95d6200f529e7d9b7474b568c8
| 2022-08-31T13:27:05Z |
python
| 2022-09-27T17:21:38Z |
docs/docsite/rst/playbook_guide/playbooks_filters.rst
|
.. _playbooks_filters:
********************************
Using filters to manipulate data
********************************
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:
.. code-block:: yaml+jinja
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
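For example, a minimal sketch of a role's ``defaults/main.yml`` (the role name and variable are illustrative):

.. code-block:: yaml

    # roles/myrole/defaults/main.yml
    some_variable: 5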
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:
.. code-block:: yaml+jinja
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:
.. code-block:: yaml+jinja
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the ``mode=0444`` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:
.. code-block:: yaml+jinja
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
A convenient way of requiring a variable to be overridden is to give it an undefined value using the ``undef`` keyword. This can be useful in a role's defaults.
.. code-block:: yaml+jinja
galaxy_url: "https://galaxy.ansible.com"
galaxy_api_key: {{ undef(hint="You must specify your Galaxy API key") }}
Defining different values for true/false/null (ternary)
=======================================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):
.. code-block:: yaml+jinja
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
In addition, you can define one value to use on true, one value on false, and a third value on null (new in version 2.8):
.. code-block:: yaml+jinja
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Managing data types
===================
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:
.. code-block:: yaml+jinja
{{ myvar | type_debug }}
Note that, while this may seem like a useful filter for checking that you have the right type of data in a variable, you should often prefer :ref:`type tests <type_tests>`, which allow you to test for specific data types.
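For instance, a minimal sketch of a type test (``myvar`` is illustrative; ``string`` is a built-in Jinja2 test):

.. code-block:: yaml+jinja

    {{ myvar is string }}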
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:
.. code-block:: yaml+jinja
{{ dict | dict2items }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
tags:
Application: payment
Environment: dev
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
If you want to configure the names of the keys, the ``dict2items`` filter accepts two keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:
.. code-block:: yaml+jinja
{{ files | dict2items(key_name='file', value_name='path') }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
files:
users: /etc/passwd
groups: /etc/group
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:
.. code-block:: yaml+jinja
{{ tags | items2dict }}
List data (before applying the ``items2dict`` filter):
.. code-block:: yaml
tags:
- key: Application
value: payment
- key: Environment
value: dev
Dictionary data (after applying the ``items2dict`` filter):
.. code-block:: text
Application: payment
Environment: dev
The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:
.. code-block:: yaml
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:
.. code-block:: yaml+jinja
    {{ fruits | items2dict(key_name='fruit', value_name='color') }}
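Dictionary data (after applying the ``items2dict`` filter):

.. code-block:: yaml

    apple: red
    pear: yellow
    grapefruit: yellow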
If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:
.. code-block:: yaml
- ansible.builtin.debug:
msg: test
when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:
.. code-block:: yaml
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Formatting data: YAML and JSON
==============================
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:
.. code-block:: yaml+jinja
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default line width of 80 characters. That causes an unexpected line break after the 80th character (if there is a space after the 80th character).
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:
.. code-block:: yaml+jinja
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_ for ``dump()``.
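For instance, a sketch passing one such parameter through to ``dump()`` (``default_flow_style`` is a standard PyYAML keyword; ``to_nice_yaml`` already sets it internally, so pass it to ``to_yaml`` only):

.. code-block:: yaml+jinja

    {{ some_variable | to_yaml(default_flow_style=True) }}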
If you are reading in some already formatted data:
.. code-block:: yaml+jinja
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
for example:
.. code-block:: yaml+jinja
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
Filter `to_json` and Unicode support
------------------------------------
By default `to_json` and `to_nice_json` will convert data received to ASCII, so:
.. code-block:: yaml+jinja
{{ 'München'| to_json }}
will return:
.. code-block:: text
'M\u00fcnchen'
To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:
.. code-block:: yaml+jinja
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
for example:
.. code-block:: yaml+jinja
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
.. _zip_filter_example:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``:
.. code-block:: yaml+jinja
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]]
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"]]
To always exhaust all lists use ``zip_longest``:
.. code-block:: yaml+jinja
    - name: Give me longest combo of three lists, fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
# => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]]
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:
.. code-block:: yaml+jinja
{{ dict(keys_list | zip(values_list)) }}
List data (before applying the ``zip`` filter):
.. code-block:: yaml
keys_list:
- one
- two
values_list:
- apple
- orange
Dictionary data (after applying the ``zip`` filter):
.. code-block:: yaml
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression:
.. code-block:: yaml+jinja
{{ users | subelements('groups', skip_missing=True) }}
Data before applying the ``subelements`` filter:
.. code-block:: yaml
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Data after applying the ``subelements`` filter:
.. code-block:: yaml
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:
.. code-block:: yaml+jinja
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:
.. code-block:: yaml+jinja
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be:
.. code-block:: text
{'a':1, 'b':3}
The filter can also take multiple arguments to merge:
.. code-block:: yaml+jinja
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
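A minimal sketch with concrete values:

.. code-block:: yaml+jinja

    {{ {'a': 1} | combine({'a': 2}, {'a': 3}) }}
    # => {'a': 3}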
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
  Is a boolean, defaulting to ``False``.
  Determines whether ``combine`` should recursively merge nested hashes.
Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
list_merge
Is a string, its possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged:
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True) }}
This would result in:
.. code-block:: yaml
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:
.. code-block:: yaml
default:
a:
- default
patch:
a:
- patch
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='keep') }}
This would result in:
.. code-block:: yaml
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append') }}
This would result in:
.. code-block:: yaml
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend') }}
This would result in:
.. code-block:: yaml
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept:
.. code-block:: yaml
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in:
.. code-block:: yaml
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together:
.. code-block:: yaml
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array):
.. code-block:: yaml+jinja
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be:
.. code-block:: none
['x', 'z']
[42, 31]
The filter can take another argument:
.. code-block:: yaml+jinja
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive lookup inside the container:
.. code-block:: yaml+jinja
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
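For instance, with the illustrative data below, the expression above would return ``['deep']``:

.. code-block:: yaml

    b:
      a:
        x:
          y: deep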
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list:
.. code-block:: yaml+jinja
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size:
.. code-block:: yaml+jinja
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}"
Also see the :ref:`zip_filter_example`.
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example:
.. code-block:: yaml+jinja
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in:
.. code-block:: json
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <https://jmespath.org/examples.html>`_.
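For illustration, a typical way to install the dependency on the controller with ``pip``:
.. code-block:: sh
pip install jmespath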
Consider this data structure:
.. code-block:: json
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query:
.. code-block:: yaml+jinja
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names:
.. code-block:: yaml+jinja
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma separated string:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
.. code-block:: yaml+jinja
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
    server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
To extract ports from all clusters with name starting with 'server1':
.. code-block:: yaml+jinja
- name: Display ports from servers whose name starts with 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract ports from all clusters with name containing 'server1':
.. code-block:: yaml+jinja
- name: Display ports from servers whose name contains 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: When using ``starts_with`` and ``contains``, you must use the ``to_json | from_json`` filters for correct parsing of the data structure.
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with '52:54:00':
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if the prefix string is invalid, the filter raises an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
.. _random_filter_example:
Random items or numbers
-----------------------
The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list:
.. code-block:: yaml+jinja
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 (inclusive) and a specified integer (exclusive):
.. code-block:: yaml+jinja
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
To get a random number from 0 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(step=10) }}
# => 70
To get a random number from 1 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
You can initialize the random number generator from a seed to create random-but-idempotent numbers:
.. code-block:: yaml+jinja
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
The ``shuffle`` filter randomizes an existing list, giving a different order every invocation.
To get a random list from an existing list:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non 'listable' item, the filter does nothing.
.. _list_filters:
Managing list variables
=======================
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from list of numbers:
.. code-block:: yaml+jinja
{{ list1 | min }}
.. versionadded:: 2.11
To get the minimum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
To get the maximum value from a list of numbers:
.. code-block:: yaml+jinja
{{ [3, 4, 2] | max }}
.. versionadded:: 2.11
To get the maximum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does):
.. code-block:: yaml+jinja
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
Flatten only the first level of a list (akin to the `items` lookup):
.. code-block:: yaml+jinja
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
.. versionadded:: 2.11
To preserve nulls in a list (by default, ``flatten`` removes them):
.. code-block:: yaml+jinja
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
# => [3, None, 4, [2]]
.. _set_theory_filters:
Selecting from sets or lists (set theory)
=========================================
You can select or combine items from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
To get a union of two lists:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
To get the intersection of 2 lists (unique list of all items in both):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
To get the difference of 2 lists (items in 1 that don't exist in 2):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
To get the symmetric difference of 2 lists (items exclusive to each list):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
.. _math_stuff:
Calculating numbers (math)
==========================
.. versionadded:: 1.9
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
Get the logarithm (default is e):
.. code-block:: yaml+jinja
{{ 8 | log }}
# => 2.0794415416798357
Get the base 10 logarithm:
.. code-block:: yaml+jinja
{{ 8 | log(10) }}
# => 0.9030899869919435
Give me the power of 2! (or 5):
.. code-block:: yaml+jinja
{{ 8 | pow(5) }}
# => 32768.0
Square root, or the 5th:
.. code-block:: yaml+jinja
{{ 8 | root }}
# => 2.8284271247461903
{{ 8 | root(5) }}
# => 1.5157165665103982
Managing network interactions
=============================
These filters help you with common network tasks.
.. note::
These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipaddr }}
You can also require a specific IP protocol version:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
The IP address filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use:
.. code-block:: yaml+jinja
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
# => 192.0.2.1
More information about the ``ipaddr`` filter and a complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
The ``parse_cli`` filter loads the spec file and passes the command output
through it, returning JSON output. The spec file must be valid YAML; it
defines how to parse the CLI output and return JSON data.
Below is an example of a valid spec file that
parses the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break a large command
output into blocks. This can be done using the ``start_block`` and
``end_block`` directives, which delimit the blocks to be parsed.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM, use the following
filter:
.. code-block:: yaml+jinja
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
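For illustration, a typical way to install the library on the control node with ``pip``:
.. code-block:: sh
pip install textfsm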
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
The ``parse_xml`` filter loads the spec file and passes the command output
through it, returning JSON output.
The spec file must be valid YAML. It defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
parses the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the value of ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name, and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:
.. code-block:: none
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note::
For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* VLANs are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters long.
To sort a VLAN list:
.. code-block:: yaml+jinja
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
This example renders the following sorted list:
.. code-block:: text
['100,1688,3002-3005,3999']
Another example Jinja template:
.. code-block:: yaml+jinja
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
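With the single-element sorted list from the earlier example, this template would render as:
.. code-block:: text
switchport trunk allowed vlan 100,1688,3002-3005,3999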
.. _hash_filters:
Hashing and encrypting strings and passwords
==============================================
.. versionadded:: 1.9
To get the sha1 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('sha1') }}
# => "b444ac06613fc8d63795be9ad0beaf55011936ac"
To get the md5 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('md5') }}
# => "5a105e8b9d40e1329780d62ea2265d8a"
Get a string checksum:
.. code-block:: yaml+jinja
{{ 'test2' | checksum }}
# => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f"
Other hashes (platform dependent):
.. code-block:: yaml+jinja
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt):
.. code-block:: yaml+jinja
{{ 'passwordsaresecret' | password_hash('sha512') }}
# => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/"
To get a sha256 password hash with a specific salt:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
# => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4"
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
# => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0"
The hash types available depend on the control system running Ansible: ``hash`` depends on `hashlib <https://docs.python.org/3.8/library/hashlib.html>`_, while ``password_hash`` depends on `passlib <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html>`_. The `crypt <https://docs.python.org/3.8/library/crypt.html>`_ library is used as a fallback if ``passlib`` is not installed.
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
# => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7"
The filter `password_hash` produces different results depending on whether you installed `passlib` or not.
To ensure idempotency, specify `rounds` to be neither `crypt`'s nor `passlib`'s default, which is `5000` for `crypt` and a variable value (`535000` for sha256, `656000` for sha512) for `passlib`:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=5001) }}
# => "$5$rounds=5001$mysecretsalt$wXcTWWXbfcR8er5IVf7NuquLvnUA6s8/qdtOhAZ.xN."
Hash type 'blowfish' (BCrypt) provides the facility to specify the version of the BCrypt algorithm.
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('blowfish', '1234567890123456789012', ident='2b') }}
# => "$2b$12$123456789012345678901uuJ4qFdej6xnWjOQT.FStqfdoY8dYUPC"
.. note::
The parameter is only available for `blowfish (BCrypt) <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt>`_.
Other hash types will simply ignore this parameter.
Valid values for this parameter are: ['2', '2a', '2y', '2b']
.. versionadded:: 2.12
You can also use the Ansible :ref:`vault <vault>` filter to encrypt data:
.. code-block:: yaml+jinja
# simply encrypt my key in a vault
vars:
myvaultedkey: "{{ keyrawdata|vault(passphrase) }}"
- name: save templated vaulted data
template: src=dump_template_data.j2 dest=/some/key/vault.txt
vars:
mysalt: '{{ 2**256|random(seed=inventory_hostname) }}'
template_data: '{{ secretdata|vault(vaultsecret, salt=mysalt) }}'
And then decrypt it using the unvault filter:
.. code-block:: yaml+jinja
# simply decrypt my key from a vault
vars:
mykey: "{{ myvaultedkey|unvault(passphrase) }}"
- name: save templated unvaulted data
template: src=dump_template_data.j2 dest=/some/key/clear.txt
vars:
template_data: '{{ secretdata|unvault(vaultsecret) }}'
.. _other_useful_filters:
Manipulating text
=================
Several filters work with text, including URLs, file names, and path names.
.. _comment_filter:
Adding comments to files
------------------------
The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default, Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example, the following:
.. code-block:: yaml+jinja
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):
.. code-block:: yaml+jinja
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter:
.. code-block:: yaml+jinja
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style:
.. code-block:: yaml+jinja
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example, to
make the output of the ``ansible_managed`` variable more readable, you can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: ini
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter:
.. code-block:: yaml+jinja
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
URLEncode Variables
-------------------
The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:
.. code-block:: yaml+jinja
{{ 'Trollhättan' | urlencode }}
# => 'Trollh%C3%A4ttan'
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from a URL. With no arguments, it returns a dictionary of all the fields:
.. code-block:: yaml+jinja
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:
.. code-block:: yaml+jinja
# Extracts the database name from a string
{{ 'server1/database42' | regex_search('database[0-9]+') }}
# => 'database42'
# Example for a case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }}
# => 'BAR'
# Extracts server and database id from a string
{{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }}
# => ['1', '42']
# Extracts dividend and divisor from a division
{{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }}
# => ['21', '42']
The ``regex_search`` filter returns an empty string if it cannot find a match:
.. code-block:: yaml+jinja
{{ 'ansible' | regex_search('foobar') }}
# => ''
.. note::
The ``regex_search`` filter returns ``None`` when used in a Jinja expression (for example in conjunction with operators, other filters, and so on). See the two examples below.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') == '' }}
# => False
{{ 'ansible' | regex_search('foobar') is none }}
# => True
This is due to historic behavior and the custom re-implementation of some of the Jinja internals in Ansible. Enable the ``jinja2_native`` setting if you want the ``regex_search`` filter to always return ``None`` if it cannot find a match. See :ref:`jinja2_faqs` for details.
To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:
.. code-block:: yaml+jinja
# Returns a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
# => ['8.8.8.8', '8.8.4.4']
# Returns all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }}
# => ['CAR', 'tar', 'bar']
To replace text in a string with regex, use the ``regex_replace`` filter:
.. code-block:: yaml+jinja
# Convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# => 'able'
# Convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# => 'bar'
# Convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# => 'localhost, 80'
# Convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# => 'localhost'
# Comment all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }}
# => '#CAR\n#tar\nfoo\n#bar\n'
.. note::
If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression in the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:
.. code-block:: yaml+jinja
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note::
Prior to Ansible 2.0, if the ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Managing file names and path names
----------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':
.. code-block:: yaml+jinja
{{ path | basename }}
To get the last name of a windows style file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_basename }}
To separate the windows drive letter from the rest of a file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_splitdrive }}
To get only the windows drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | last }}
To get the directory from a path:
.. code-block:: yaml+jinja
{{ path | dirname }}
To get the directory from a windows path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_dirname }}
To expand a path containing a tilde (`~`) character (new in version 1.5):
.. code-block:: yaml+jinja
{{ path | expanduser }}
To expand a path containing environment variables:
.. code-block:: yaml+jinja
{{ path | expandvars }}
.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8):
.. code-block:: yaml+jinja
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7):
.. code-block:: yaml+jinja
{{ path | relpath('/etc') }}
To get the root and extension of a path or file name (new in version 2.0):
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
The ``splitext`` filter always returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters:
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
To join one or more path components:
.. code-block:: yaml+jinja
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
Manipulating strings
====================
To add quotes for shell usage:
.. code-block:: yaml+jinja
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
To concatenate a list into a string:
.. code-block:: yaml+jinja
{{ list | join(" ") }}
To split a string into a list:
.. code-block:: yaml+jinja
{{ csv_string | split(",") }}
.. versionadded:: 2.11
To work with Base64 encoded strings:
.. code-block:: yaml+jinja
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``:
.. code-block:: yaml+jinja
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
.. note:: The ``string`` filter is only required for Python 2 and ensures that text to encode is a unicode string. Without that filter before ``b64encode``, the wrong value will be encoded.
.. versionadded:: 2.6
Managing UUIDs
==============
To create a namespaced UUIDv5:
.. code-block:: yaml+jinja
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':
.. code-block:: yaml+jinja
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:jinja-filters.map>`:
.. code-block:: yaml+jinja
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Handling dates and times
========================
To get a date object from a string use the `to_datetime` filter:
.. code-block:: yaml+jinja
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter:
.. code-block:: yaml+jinja
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. versionadded:: 2.13
``strftime`` takes an optional ``utc`` argument, which defaults to ``False``, meaning times are in the local timezone:
.. code-block:: yaml+jinja
{{ '%H:%M:%S' | strftime }}           # time now in local timezone
{{ '%H:%M:%S' | strftime(utc=True) }} # time now in UTC
.. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
Getting Kubernetes resource names
=================================
.. note::
These filters have migrated to the `kubernetes.core <https://galaxy.ansible.com/kubernetes/core>`_ collection. Follow the installation instructions to install that collection.
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash:
.. code-block:: yaml+jinja
{{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications:
.. code-block:: yaml+jinja
my_secret:
kind: Secret
metadata:
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | kubernetes.core.k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`tips_and_tricks`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,675 |
More os.path filters
|
### Summary
Please add more filters based on os.path, specifically: `os.path.commonpath`, `os.path.normpath`. It is cheap, but would be handy in such tasks as validation, archive management, programmatic path generation. Can replace a lot of loops and regex filters.
### Issue Type
Feature Idea
### Component Name
core
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78675
|
https://github.com/ansible/ansible/pull/78894
|
7c4d5f509930d832c6cbd5d5660c26e9d73fab58
|
6e949d8f5d6dcf95d6200f529e7d9b7474b568c8
| 2022-08-31T13:27:05Z |
python
| 2022-09-27T17:21:38Z |
lib/ansible/plugins/filter/core.py
|
# (c) 2012, Jeroen Hoekx <jeroen.hoekx@dsquare.be>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import base64
import glob
import hashlib
import json
import ntpath
import os.path
import re
import shlex
import sys
import time
import uuid
import yaml
import datetime
from collections.abc import Mapping
from functools import partial
from random import Random, SystemRandom, shuffle
from jinja2.filters import pass_environment
from ansible.errors import AnsibleError, AnsibleFilterError, AnsibleFilterTypeError
from ansible.module_utils.six import string_types, integer_types, reraise, text_type
from ansible.module_utils._text import to_bytes, to_native, to_text
from ansible.module_utils.common.collections import is_sequence
from ansible.module_utils.common.yaml import yaml_load, yaml_load_all
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.template import recursive_check_defined
from ansible.utils.display import Display
from ansible.utils.encrypt import passlib_or_crypt
from ansible.utils.hashing import md5s, checksum_s
from ansible.utils.unicode import unicode_wrap
from ansible.utils.vars import merge_hash
display = Display()
UUID_NAMESPACE_ANSIBLE = uuid.UUID('361E6D51-FAEC-444A-9079-341386DA8E2E')
def to_yaml(a, *args, **kw):
'''Make verbose, human readable yaml'''
default_flow_style = kw.pop('default_flow_style', None)
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, allow_unicode=True, default_flow_style=default_flow_style, **kw)
except Exception as e:
raise AnsibleFilterError("to_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_nice_yaml(a, indent=4, *args, **kw):
'''Make verbose, human readable yaml'''
try:
transformed = yaml.dump(a, Dumper=AnsibleDumper, indent=indent, allow_unicode=True, default_flow_style=False, **kw)
except Exception as e:
raise AnsibleFilterError("to_nice_yaml - %s" % to_native(e), orig_exc=e)
return to_text(transformed)
def to_json(a, *args, **kw):
''' Convert the value to JSON '''
# defaults for filters
if 'vault_to_text' not in kw:
kw['vault_to_text'] = True
if 'preprocess_unsafe' not in kw:
kw['preprocess_unsafe'] = False
return json.dumps(a, cls=AnsibleJSONEncoder, *args, **kw)
def to_nice_json(a, indent=4, sort_keys=True, *args, **kw):
'''Make verbose, human readable JSON'''
return to_json(a, indent=indent, sort_keys=sort_keys, separators=(',', ': '), *args, **kw)
def to_bool(a):
''' return a bool for the arg '''
if a is None or isinstance(a, bool):
return a
if isinstance(a, string_types):
a = a.lower()
if a in ('yes', 'on', '1', 'true', 1):
return True
return False
def to_datetime(string, format="%Y-%m-%d %H:%M:%S"):
return datetime.datetime.strptime(string, format)
def strftime(string_format, second=None, utc=False):
''' return a date string using string. See https://docs.python.org/3/library/time.html#time.strftime for format '''
if utc:
timefn = time.gmtime
else:
timefn = time.localtime
if second is not None:
try:
second = float(second)
except Exception:
raise AnsibleFilterError('Invalid value for epoch value (%s)' % second)
return time.strftime(string_format, timefn(second))
def quote(a):
''' return its argument quoted for shell usage '''
if a is None:
a = u''
return shlex.quote(to_text(a))
def fileglob(pathname):
''' return list of matched regular files for glob '''
return [g for g in glob.glob(pathname) if os.path.isfile(g)]
def regex_replace(value='', pattern='', replacement='', ignorecase=False, multiline=False):
''' Perform a `re.sub` returning a string '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
_re = re.compile(pattern, flags=flags)
return _re.sub(replacement, value)
def regex_findall(value, regex, multiline=False, ignorecase=False):
''' Perform re.findall and return the list of matches '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
flags = 0
if ignorecase:
flags |= re.I
if multiline:
flags |= re.M
return re.findall(regex, value, flags)
def regex_search(value, regex, *args, **kwargs):
''' Perform re.search and return the list of matches or a backref '''
value = to_text(value, errors='surrogate_or_strict', nonstring='simplerepr')
groups = list()
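    # translate optional backreference arguments ('\g<name>' or '\N') into group names or indices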
for arg in args:
if arg.startswith('\\g'):
match = re.match(r'\\g<(\S+)>', arg).group(1)
groups.append(match)
elif arg.startswith('\\'):
match = int(re.match(r'\\(\d+)', arg).group(1))
groups.append(match)
else:
raise AnsibleFilterError('Unknown argument')
flags = 0
if kwargs.get('ignorecase'):
flags |= re.I
if kwargs.get('multiline'):
flags |= re.M
match = re.search(regex, value, flags)
if match:
if not groups:
return match.group()
else:
items = list()
for item in groups:
items.append(match.group(item))
return items
def ternary(value, true_val, false_val, none_val=None):
''' value ? true_val : false_val '''
if value is None and none_val is not None:
return none_val
elif bool(value):
return true_val
else:
return false_val
def regex_escape(string, re_type='python'):
    '''Escape all regular expressions special characters from STRING.'''
    string = to_text(string, errors='surrogate_or_strict', nonstring='simplerepr')
if re_type == 'python':
return re.escape(string)
elif re_type == 'posix_basic':
# list of BRE special chars:
# https://en.wikibooks.org/wiki/Regular_Expressions/POSIX_Basic_Regular_Expressions
return regex_replace(string, r'([].[^$*\\])', r'\\\1')
# TODO: implement posix_extended
# It's similar to, but different from python regex, which is similar to,
# but different from PCRE. It's possible that re.escape would work here.
# https://remram44.github.io/regex-cheatsheet/regex.html#programs
elif re_type == 'posix_extended':
raise AnsibleFilterError('Regex type (%s) not yet implemented' % re_type)
else:
raise AnsibleFilterError('Invalid regex type (%s)' % re_type)
def from_yaml(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load(text_type(to_text(data, errors='surrogate_or_strict')))
return data
def from_yaml_all(data):
if isinstance(data, string_types):
# The ``text_type`` call here strips any custom
# string wrapper class, so that CSafeLoader can
# read the data
return yaml_load_all(text_type(to_text(data, errors='surrogate_or_strict')))
return data
@pass_environment
def rand(environment, end, start=None, step=None, seed=None):
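    # without a seed, use SystemRandom for non-deterministic values; a seed gives idempotent results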
if seed is None:
r = SystemRandom()
else:
r = Random(seed)
if isinstance(end, integer_types):
if not start:
start = 0
if not step:
step = 1
return r.randrange(start, end, step)
elif hasattr(end, '__iter__'):
if start or step:
raise AnsibleFilterError('start and step can only be used with integer values')
return r.choice(end)
else:
raise AnsibleFilterError('random can only be used on sequences and integers')
def randomize_list(mylist, seed=None):
try:
mylist = list(mylist)
if seed:
r = Random(seed)
r.shuffle(mylist)
else:
shuffle(mylist)
except Exception:
pass
return mylist
def get_hash(data, hashtype='sha1'):
try:
h = hashlib.new(hashtype)
except Exception as e:
# hash is not supported?
raise AnsibleFilterError(e)
h.update(to_bytes(data, errors='surrogate_or_strict'))
return h.hexdigest()
def get_encrypted_password(password, hashtype='sha512', salt=None, salt_size=None, rounds=None, ident=None):
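    # map Ansible's short algorithm names to the corresponding passlib scheme names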
passlib_mapping = {
'md5': 'md5_crypt',
'blowfish': 'bcrypt',
'sha256': 'sha256_crypt',
'sha512': 'sha512_crypt',
}
hashtype = passlib_mapping.get(hashtype, hashtype)
try:
return passlib_or_crypt(password, hashtype, salt=salt, salt_size=salt_size, rounds=rounds, ident=ident)
except AnsibleError as e:
reraise(AnsibleFilterError, AnsibleFilterError(to_native(e), orig_exc=e), sys.exc_info()[2])
def to_uuid(string, namespace=UUID_NAMESPACE_ANSIBLE):
uuid_namespace = namespace
if not isinstance(uuid_namespace, uuid.UUID):
try:
uuid_namespace = uuid.UUID(namespace)
except (AttributeError, ValueError) as e:
raise AnsibleFilterError("Invalid value '%s' for 'namespace': %s" % (to_native(namespace), to_native(e)))
    # uuid.uuid5() requires bytes on Python 2 and bytes or text on Python 3
return to_text(uuid.uuid5(uuid_namespace, to_native(string, errors='surrogate_or_strict')))
def mandatory(a, msg=None):
    ''' Make a variable mandatory '''
    from jinja2.runtime import Undefined
if isinstance(a, Undefined):
if a._undefined_name is not None:
name = "'%s' " % to_text(a._undefined_name)
else:
name = ''
if msg is not None:
raise AnsibleFilterError(to_native(msg))
else:
raise AnsibleFilterError("Mandatory variable %s not defined." % name)
return a
def combine(*terms, **kwargs):
recursive = kwargs.pop('recursive', False)
list_merge = kwargs.pop('list_merge', 'replace')
if kwargs:
raise AnsibleFilterError("'recursive' and 'list_merge' are the only valid keyword arguments")
# allow the user to do `[dict1, dict2, ...] | combine`
dictionaries = flatten(terms, levels=1)
    # recursively check that every element is defined (for jinja2)
recursive_check_defined(dictionaries)
if not dictionaries:
return {}
if len(dictionaries) == 1:
return dictionaries[0]
    # merge all the dicts so that the dict at the end of the array has precedence
    # over the dict at the beginning.
    # we merge the dicts from the highest to the lowest priority because there is
    # a huge probability that the lowest priority dict will be the biggest in size
    # (as the low prio dict will hold the "default" values and the others will be "patches")
    # and merge_hash creates a copy of its first argument.
    # so high/right -> low/left is more efficient than low/left -> high/right
high_to_low_prio_dict_iterator = reversed(dictionaries)
result = next(high_to_low_prio_dict_iterator)
for dictionary in high_to_low_prio_dict_iterator:
result = merge_hash(dictionary, result, recursive, list_merge)
return result
def comment(text, style='plain', **kw):
# Predefined comment types
comment_styles = {
'plain': {
'decoration': '# '
},
'erlang': {
'decoration': '% '
},
'c': {
'decoration': '// '
},
'cblock': {
'beginning': '/*',
'decoration': ' * ',
'end': ' */'
},
'xml': {
'beginning': '<!--',
'decoration': ' - ',
'end': '-->'
}
}
# Pointer to the right comment type
style_params = comment_styles[style]
if 'decoration' in kw:
prepostfix = kw['decoration']
else:
prepostfix = style_params['decoration']
# Default params
p = {
'newline': '\n',
'beginning': '',
'prefix': (prepostfix).rstrip(),
'prefix_count': 1,
'decoration': '',
'postfix': (prepostfix).rstrip(),
'postfix_count': 1,
'end': ''
}
# Update default params
p.update(style_params)
p.update(kw)
# Compose substrings for the final string
str_beginning = ''
if p['beginning']:
str_beginning = "%s%s" % (p['beginning'], p['newline'])
str_prefix = ''
if p['prefix']:
if p['prefix'] != p['newline']:
str_prefix = str(
"%s%s" % (p['prefix'], p['newline'])) * int(p['prefix_count'])
else:
str_prefix = str(
"%s" % (p['newline'])) * int(p['prefix_count'])
str_text = ("%s%s" % (
p['decoration'],
# Prepend each line of the text with the decorator
text.replace(
p['newline'], "%s%s" % (p['newline'], p['decoration'])))).replace(
# Remove trailing spaces when only decorator is on the line
"%s%s" % (p['decoration'], p['newline']),
"%s%s" % (p['decoration'].rstrip(), p['newline']))
str_postfix = p['newline'].join(
[''] + [p['postfix'] for x in range(p['postfix_count'])])
str_end = ''
if p['end']:
str_end = "%s%s" % (p['newline'], p['end'])
# Return the final string
return "%s%s%s%s%s" % (
str_beginning,
str_prefix,
str_text,
str_postfix,
str_end)
@pass_environment
def extract(environment, item, container, morekeys=None):
if morekeys is None:
keys = [item]
elif isinstance(morekeys, list):
keys = [item] + morekeys
else:
keys = [item, morekeys]
value = container
for key in keys:
value = environment.getitem(value, key)
return value
def b64encode(string, encoding='utf-8'):
return to_text(base64.b64encode(to_bytes(string, encoding=encoding, errors='surrogate_or_strict')))
def b64decode(string, encoding='utf-8'):
return to_text(base64.b64decode(to_bytes(string, errors='surrogate_or_strict')), encoding=encoding)
def flatten(mylist, levels=None, skip_nulls=True):
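    # levels=None flattens completely; an integer limits the recursion depth;
    # null-ish items are dropped unless skip_nulls=False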
ret = []
for element in mylist:
if skip_nulls and element in (None, 'None', 'null'):
# ignore null items
continue
elif is_sequence(element):
if levels is None:
ret.extend(flatten(element, skip_nulls=skip_nulls))
elif levels >= 1:
# decrement as we go down the stack
ret.extend(flatten(element, levels=(int(levels) - 1), skip_nulls=skip_nulls))
else:
ret.append(element)
else:
ret.append(element)
return ret
def subelements(obj, subelements, skip_missing=False):
'''Accepts a dict or list of dicts, and a dotted accessor and produces a product
of the element and the results of the dotted accessor
>>> obj = [{"name": "alice", "groups": ["wheel"], "authorized": ["/tmp/alice/onekey.pub"]}]
>>> subelements(obj, 'groups')
[({'name': 'alice', 'groups': ['wheel'], 'authorized': ['/tmp/alice/onekey.pub']}, 'wheel')]
'''
if isinstance(obj, dict):
element_list = list(obj.values())
elif isinstance(obj, list):
element_list = obj[:]
else:
raise AnsibleFilterError('obj must be a list of dicts or a nested dict')
if isinstance(subelements, list):
subelement_list = subelements[:]
elif isinstance(subelements, string_types):
subelement_list = subelements.split('.')
else:
raise AnsibleFilterTypeError('subelements must be a list or a string')
results = []
for element in element_list:
values = element
for subelement in subelement_list:
try:
values = values[subelement]
except KeyError:
if skip_missing:
values = []
break
raise AnsibleFilterError("could not find %r key in iterated item %r" % (subelement, values))
except TypeError:
raise AnsibleFilterTypeError("the key %s should point to a dictionary, got '%s'" % (subelement, values))
if not isinstance(values, list):
raise AnsibleFilterTypeError("the key %r should point to a list, got %r" % (subelement, values))
for value in values:
results.append((element, value))
return results
def dict_to_list_of_dict_key_value_elements(mydict, key_name='key', value_name='value'):
    ''' takes a dictionary and transforms it into a list of dictionaries,
        with each having 'key' and 'value' keys that correspond to the keys and values of the original '''
if not isinstance(mydict, Mapping):
raise AnsibleFilterTypeError("dict2items requires a dictionary, got %s instead." % type(mydict))
ret = []
for key in mydict:
ret.append({key_name: key, value_name: mydict[key]})
return ret
def list_of_dict_key_value_elements_to_dict(mylist, key_name='key', value_name='value'):
    ''' takes a list of dicts, with each having 'key' and 'value' keys, and transforms the list into a dictionary,
        effectively the reverse of dict2items '''
if not is_sequence(mylist):
raise AnsibleFilterTypeError("items2dict requires a list, got %s instead." % type(mylist))
try:
return dict((item[key_name], item[value_name]) for item in mylist)
except KeyError:
raise AnsibleFilterTypeError(
"items2dict requires each dictionary in the list to contain the keys '%s' and '%s', got %s instead."
% (key_name, value_name, mylist)
)
except TypeError:
raise AnsibleFilterTypeError("items2dict requires a list of dictionaries, got %s instead." % mylist)
def path_join(paths):
    ''' takes a sequence or a string, and returns a concatenation
        of the different members '''
if isinstance(paths, string_types):
return os.path.join(paths)
elif is_sequence(paths):
return os.path.join(*paths)
else:
raise AnsibleFilterTypeError("|path_join expects string or sequence, got %s instead." % type(paths))
def commonpath(paths):
"""
Retrieve the longest common path from the given list.
:param paths: A list of file system paths.
:type paths: List[str]
:returns: The longest common path.
:rtype: str
"""
if not is_sequence(paths):
raise AnsibleFilterTypeError("|path_join expects sequence, got %s instead." % type(paths))
return os.path.commonpath(paths)
class FilterModule(object):
''' Ansible core jinja2 filters '''
def filters(self):
return {
# base 64
'b64decode': b64decode,
'b64encode': b64encode,
# uuid
'to_uuid': to_uuid,
# json
'to_json': to_json,
'to_nice_json': to_nice_json,
'from_json': json.loads,
# yaml
'to_yaml': to_yaml,
'to_nice_yaml': to_nice_yaml,
'from_yaml': from_yaml,
'from_yaml_all': from_yaml_all,
# path
'basename': partial(unicode_wrap, os.path.basename),
'dirname': partial(unicode_wrap, os.path.dirname),
'expanduser': partial(unicode_wrap, os.path.expanduser),
'expandvars': partial(unicode_wrap, os.path.expandvars),
'path_join': path_join,
'realpath': partial(unicode_wrap, os.path.realpath),
'relpath': partial(unicode_wrap, os.path.relpath),
'splitext': partial(unicode_wrap, os.path.splitext),
'win_basename': partial(unicode_wrap, ntpath.basename),
'win_dirname': partial(unicode_wrap, ntpath.dirname),
'win_splitdrive': partial(unicode_wrap, ntpath.splitdrive),
'commonpath': commonpath,
# file glob
'fileglob': fileglob,
# types
'bool': to_bool,
'to_datetime': to_datetime,
# date formatting
'strftime': strftime,
# quote string for shell usage
'quote': quote,
# hash filters
# md5 hex digest of string
'md5': md5s,
# sha1 hex digest of string
'sha1': checksum_s,
# checksum of string as used by ansible for checksumming files
'checksum': checksum_s,
# generic hashing
'password_hash': get_encrypted_password,
'hash': get_hash,
# regex
'regex_replace': regex_replace,
'regex_escape': regex_escape,
'regex_search': regex_search,
'regex_findall': regex_findall,
# ? : ;
'ternary': ternary,
# random stuff
'random': rand,
'shuffle': randomize_list,
# undefined
'mandatory': mandatory,
# comment-style decoration
'comment': comment,
# debug
'type_debug': lambda o: o.__class__.__name__,
# Data structures
'combine': combine,
'extract': extract,
'flatten': flatten,
'dict2items': dict_to_list_of_dict_key_value_elements,
'items2dict': list_of_dict_key_value_elements_to_dict,
'subelements': subelements,
'split': partial(unicode_wrap, text_type.split),
}
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,675 |
More os.path filters
|
### Summary
Please add more filters based on os.path, specifically: `os.path.commonpath`, `os.path.normpath`. It is cheap, but would be handy in such tasks as validation, archive management, programmatic path generation. Can replace a lot of loops and regex filters.
### Issue Type
Feature Idea
### Component Name
core
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78675
|
https://github.com/ansible/ansible/pull/78894
|
7c4d5f509930d832c6cbd5d5660c26e9d73fab58
|
6e949d8f5d6dcf95d6200f529e7d9b7474b568c8
| 2022-08-31T13:27:05Z |
python
| 2022-09-27T17:21:38Z |
lib/ansible/plugins/filter/normpath.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block will simply re-use the existing connection plugin, which won't have the `become` details from the first task on it.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True` so will always re-create the connection and subsequently the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong.
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so it works properly with redirected plugins and with aliased names like paramiko and paramiko_ssh. It also needs to be updated so that when a connection is re-used, the become plugin is added or removed as necessary.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the 2nd iteration will not run with become.
### Expected Results
The first task is run with become and the 2nd is run without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
changelogs/fragments/become-loop-setting.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block will just re-use the existing connection plugin, which won't have the `become` details from the first task on it.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True` so will always re-create the connection and subsequently the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong.
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so it works properly with redirected plugins and with aliased names like paramiko and paramiko_ssh. It also needs to be updated so that when a connection is re-used, the become plugin is added or removed as necessary.
The connection should also be closed when it is dropped.
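A hedged sketch of the shape such a fix could take (the name check mirrors the `matches_name()` call in the merged code below; the reuse branch is illustrative only):
```python
# illustrative only; names come from the executor context described above
reuse = (
    self._connection
    and getattr(self._connection, 'connected', False)
    and self._connection.matches_name([current_connection])
)
if not reuse:
    if self._connection:
        self._connection.close()  # close the connection being dropped
    self._connection = self._get_connection(cvars, templar, current_connection)
else:
    # a reused connection must still pick up this task's become settings
    self._connection._play_context = self._play_context
```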
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the 2nd iteration will not run with become.
### Expected Results
The first task is run with become and the 2nd is run without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
lib/ansible/executor/task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
# (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import os
import pty
import time
import json
import signal
import subprocess
import sys
import termios
import traceback
from ansible import constants as C
from ansible.errors import AnsibleError, AnsibleParserError, AnsibleUndefinedVariable, AnsibleConnectionFailure, AnsibleActionFail, AnsibleActionSkip
from ansible.executor.task_result import TaskResult
from ansible.executor.module_common import get_action_args_with_defaults
from ansible.module_utils.parsing.convert_bool import boolean
from ansible.module_utils.six import binary_type
from ansible.module_utils._text import to_text, to_native
from ansible.module_utils.connection import write_to_file_descriptor
from ansible.playbook.conditional import Conditional
from ansible.playbook.task import Task
from ansible.plugins import get_plugin_class
from ansible.plugins.loader import become_loader, cliconf_loader, connection_loader, httpapi_loader, netconf_loader, terminal_loader
from ansible.template import Templar
from ansible.utils.collection_loader import AnsibleCollectionConfig, AnsibleCollectionRef
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.unsafe_proxy import to_unsafe_text, wrap_var
from ansible.vars.clean import namespace_facts, clean_facts
from ansible.utils.display import Display
from ansible.utils.vars import combine_vars, isidentifier
display = Display()
RETURN_VARS = [x for x in C.MAGIC_VARIABLE_MAPPING.items() if 'become' not in x and '_pass' not in x]
__all__ = ['TaskExecutor']
class TaskTimeoutError(BaseException):
pass
def task_timeout(signum, frame):
raise TaskTimeoutError
def remove_omit(task_args, omit_token):
'''
Remove args with a value equal to the ``omit_token`` recursively
to align with now having suboptions in the argument_spec
'''
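    # e.g. with omit_token OMIT: {'mode': OMIT, 'env': {'A': OMIT, 'B': '1'}}
    # becomes {'env': {'B': '1'}}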
if not isinstance(task_args, dict):
return task_args
new_args = {}
for i in task_args.items():
if i[1] == omit_token:
continue
elif isinstance(i[1], dict):
new_args[i[0]] = remove_omit(i[1], omit_token)
elif isinstance(i[1], list):
new_args[i[0]] = [remove_omit(v, omit_token) for v in i[1]]
else:
new_args[i[0]] = i[1]
return new_args
class TaskExecutor:
'''
This is the main worker class for the executor pipeline, which
handles loading an action plugin to actually dispatch the task to
a given host. This class roughly corresponds to the old Runner()
class.
'''
def __init__(self, host, task, job_vars, play_context, new_stdin, loader, shared_loader_obj, final_q):
self._host = host
self._task = task
self._job_vars = job_vars
self._play_context = play_context
self._new_stdin = new_stdin
self._loader = loader
self._shared_loader_obj = shared_loader_obj
self._connection = None
self._final_q = final_q
self._loop_eval_error = None
self._task.squash()
def run(self):
'''
The main executor entrypoint, where we determine if the specified
task requires looping and either runs the task with self._run_loop()
or self._execute(). After that, the returned results are parsed and
returned as a dict.
'''
display.debug("in run() - task %s" % self._task._uuid)
try:
try:
items = self._get_loop_items()
except AnsibleUndefinedVariable as e:
# save the error raised here for use later
items = None
self._loop_eval_error = e
if items is not None:
if len(items) > 0:
item_results = self._run_loop(items)
# create the overall result item
res = dict(results=item_results)
# loop through the item results and set the global changed/failed/skipped result flags based on any item.
res['skipped'] = True
for item in item_results:
if 'changed' in item and item['changed'] and not res.get('changed'):
res['changed'] = True
if res['skipped'] and ('skipped' not in item or ('skipped' in item and not item['skipped'])):
res['skipped'] = False
if 'failed' in item and item['failed']:
item_ignore = item.pop('_ansible_ignore_errors')
if not res.get('failed'):
res['failed'] = True
res['msg'] = 'One or more items failed'
self._task.ignore_errors = item_ignore
elif self._task.ignore_errors and not item_ignore:
self._task.ignore_errors = item_ignore
# ensure to accumulate these
for array in ['warnings', 'deprecations']:
if array in item and item[array]:
if array not in res:
res[array] = []
if not isinstance(item[array], list):
item[array] = [item[array]]
res[array] = res[array] + item[array]
del item[array]
if not res.get('failed', False):
res['msg'] = 'All items completed'
if res['skipped']:
res['msg'] = 'All items skipped'
else:
res = dict(changed=False, skipped=True, skipped_reason='No items in the list', results=[])
else:
display.debug("calling self._execute()")
res = self._execute()
display.debug("_execute() done")
# make sure changed is set in the result, if it's not present
if 'changed' not in res:
res['changed'] = False
def _clean_res(res, errors='surrogate_or_strict'):
if isinstance(res, binary_type):
return to_unsafe_text(res, errors=errors)
elif isinstance(res, dict):
for k in res:
try:
res[k] = _clean_res(res[k], errors=errors)
except UnicodeError:
if k == 'diff':
# If this is a diff, substitute a replacement character if the value
# is undecodable as utf8. (Fix #21804)
display.warning("We were unable to decode all characters in the module return data."
" Replaced some in an effort to return as much as possible")
res[k] = _clean_res(res[k], errors='surrogate_then_replace')
else:
raise
elif isinstance(res, list):
for idx, item in enumerate(res):
res[idx] = _clean_res(item, errors=errors)
return res
display.debug("dumping result to json")
res = _clean_res(res)
display.debug("done dumping result, returning")
return res
except AnsibleError as e:
return dict(failed=True, msg=wrap_var(to_text(e, nonstring='simplerepr')), _ansible_no_log=self._play_context.no_log)
except Exception as e:
return dict(failed=True, msg=wrap_var('Unexpected failure during module execution: %s' % (to_native(e, nonstring='simplerepr'))),
exception=to_text(traceback.format_exc()), stdout='', _ansible_no_log=self._play_context.no_log)
finally:
try:
self._connection.close()
except AttributeError:
pass
except Exception as e:
display.debug(u"error closing connection: %s" % to_text(e))
def _get_loop_items(self):
'''
Loads a lookup plugin to handle the with_* portion of a task (if specified),
and returns the items result.
'''
# get search path for this task to pass to lookup plugins
self._job_vars['ansible_search_path'] = self._task.get_search_path()
# ensure basedir is always in (dwim already searches here but we need to display it)
if self._loader.get_basedir() not in self._job_vars['ansible_search_path']:
self._job_vars['ansible_search_path'].append(self._loader.get_basedir())
templar = Templar(loader=self._loader, variables=self._job_vars)
items = None
loop_cache = self._job_vars.get('_ansible_loop_cache')
if loop_cache is not None:
# _ansible_loop_cache may be set in `get_vars` when calculating `delegate_to`
# to avoid reprocessing the loop
items = loop_cache
elif self._task.loop_with:
if self._task.loop_with in self._shared_loader_obj.lookup_loader:
fail = True
if self._task.loop_with == 'first_found':
# first_found loops are special. If the item is undefined then we want to fall through to the next value rather than failing.
fail = False
loop_terms = listify_lookup_plugin_terms(terms=self._task.loop, templar=templar, fail_on_undefined=fail, convert_bare=False)
if not fail:
loop_terms = [t for t in loop_terms if not templar.is_template(t)]
# get lookup
mylookup = self._shared_loader_obj.lookup_loader.get(self._task.loop_with, loader=self._loader, templar=templar)
# give lookup task 'context' for subdir (mostly needed for first_found)
for subdir in ['template', 'var', 'file']: # TODO: move this to constants?
if subdir in self._task.action:
break
setattr(mylookup, '_subdir', subdir + 's')
# run lookup
items = wrap_var(mylookup.run(terms=loop_terms, variables=self._job_vars, wantlist=True))
else:
raise AnsibleError("Unexpected failure in finding the lookup named '%s' in the available lookup plugins" % self._task.loop_with)
elif self._task.loop is not None:
items = templar.template(self._task.loop)
if not isinstance(items, list):
raise AnsibleError(
"Invalid data passed to 'loop', it requires a list, got this instead: %s."
" Hint: If you passed a list/dict of just one element,"
" try adding wantlist=True to your lookup invocation or use q/query instead of lookup." % items
)
return items
def _run_loop(self, items):
'''
Runs the task with the loop items specified and collates the result
into an array named 'results' which is inserted into the final result
along with the item for which the loop ran.
'''
task_vars = self._job_vars
templar = Templar(loader=self._loader, variables=task_vars)
self._task.loop_control.post_validate(templar=templar)
loop_var = self._task.loop_control.loop_var
index_var = self._task.loop_control.index_var
loop_pause = self._task.loop_control.pause
extended = self._task.loop_control.extended
extended_allitems = self._task.loop_control.extended_allitems
# ensure we always have a label
label = self._task.loop_control.label or '{{' + loop_var + '}}'
if loop_var in task_vars:
display.warning(u"%s: The loop variable '%s' is already in use. "
u"You should set the `loop_var` value in the `loop_control` option for the task"
u" to something else to avoid variable collisions and unexpected behavior." % (self._task, loop_var))
ran_once = False
no_log = False
items_len = len(items)
results = []
for item_index, item in enumerate(items):
task_vars['ansible_loop_var'] = loop_var
task_vars[loop_var] = item
if index_var:
task_vars['ansible_index_var'] = index_var
task_vars[index_var] = item_index
if extended:
task_vars['ansible_loop'] = {
'index': item_index + 1,
'index0': item_index,
'first': item_index == 0,
'last': item_index + 1 == items_len,
'length': items_len,
'revindex': items_len - item_index,
'revindex0': items_len - item_index - 1,
}
if extended_allitems:
task_vars['ansible_loop']['allitems'] = items
try:
task_vars['ansible_loop']['nextitem'] = items[item_index + 1]
except IndexError:
pass
if item_index - 1 >= 0:
task_vars['ansible_loop']['previtem'] = items[item_index - 1]
# Update template vars to reflect current loop iteration
templar.available_variables = task_vars
# pause between loop iterations
if loop_pause and ran_once:
time.sleep(loop_pause)
else:
ran_once = True
try:
tmp_task = self._task.copy(exclude_parent=True, exclude_tasks=True)
tmp_task._parent = self._task._parent
tmp_play_context = self._play_context.copy()
except AnsibleParserError as e:
results.append(dict(failed=True, msg=to_text(e)))
continue
# now we swap the internal task and play context with their copies,
# execute, and swap them back so we can do the next iteration cleanly
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
res = self._execute(variables=task_vars)
task_fields = self._task.dump_attrs()
(self._task, tmp_task) = (tmp_task, self._task)
(self._play_context, tmp_play_context) = (tmp_play_context, self._play_context)
# update 'general no_log' based on specific no_log
no_log = no_log or tmp_task.no_log
# now update the result with the item info, and append the result
# to the list of results
res[loop_var] = item
res['ansible_loop_var'] = loop_var
if index_var:
res[index_var] = item_index
res['ansible_index_var'] = index_var
if extended:
res['ansible_loop'] = task_vars['ansible_loop']
res['_ansible_item_result'] = True
res['_ansible_ignore_errors'] = task_fields.get('ignore_errors')
# gets templated here unlike rest of loop_control fields, depends on loop_var above
try:
res['_ansible_item_label'] = templar.template(label)
except AnsibleUndefinedVariable as e:
res.update({
'failed': True,
'msg': 'Failed to template loop_control.label: %s' % to_text(e)
})
tr = TaskResult(
self._host.name,
self._task._uuid,
res,
task_fields=task_fields,
)
if tr.is_failed() or tr.is_unreachable():
self._final_q.send_callback('v2_runner_item_on_failed', tr)
elif tr.is_skipped():
self._final_q.send_callback('v2_runner_item_on_skipped', tr)
else:
if getattr(self._task, 'diff', False):
self._final_q.send_callback('v2_on_file_diff', tr)
if self._task.action not in C._ACTION_INVENTORY_TASKS:
self._final_q.send_callback('v2_runner_item_on_ok', tr)
results.append(res)
del task_vars[loop_var]
# clear 'connection related' plugin variables for next iteration
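            # (without this, per-item connection/become vars set during one
            # iteration would leak into the next)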
if self._connection:
clear_plugins = {
'connection': self._connection._load_name,
'shell': self._connection._shell._load_name
}
if self._connection.become:
clear_plugins['become'] = self._connection.become._load_name
for plugin_type, plugin_name in clear_plugins.items():
for var in C.config.get_plugin_vars(plugin_type, plugin_name):
if var in task_vars and var not in self._job_vars:
del task_vars[var]
self._task.no_log = no_log
return results
def _execute(self, variables=None):
'''
The primary workhorse of the executor system, this runs the task
on the specified host (which may be the delegated_to host) and handles
the retry/until and block rescue/always execution
'''
if variables is None:
variables = self._job_vars
templar = Templar(loader=self._loader, variables=variables)
context_validation_error = None
        # work on a temporary copy of the variables while the play context is
        # finalized; "magic" variables are added back into it below
        tempvars = variables.copy()
try:
# TODO: remove play_context as this does not take delegation nor loops correctly into account,
# the task itself should hold the correct values for connection/shell/become/terminal plugin options to finalize.
# Kept for now for backwards compatibility and a few functions that are still exclusive to it.
# apply the given task's information to the connection info,
# which may override some fields already set by the play or
# the options specified on the command line
self._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=variables, templar=templar)
# fields set from the play/task may be based on variables, so we have to
# do the same kind of post validation step on it here before we use it.
self._play_context.post_validate(templar=templar)
# now that the play context is finalized, if the remote_addr is not set
# default to using the host's address field as the remote address
if not self._play_context.remote_addr:
self._play_context.remote_addr = self._host.address
            # We also add "magic" variables back into the variables dict to make sure
            # a certain subset of variables exist.
            self._play_context.update_vars(tempvars)
except AnsibleError as e:
# save the error, which we'll raise later if we don't end up
# skipping this task during the conditional evaluation step
context_validation_error = e
no_log = self._play_context.no_log
# Evaluate the conditional (if any) for this task, which we do before running
# the final task post-validation. We do this before the post validation due to
# the fact that the conditional may specify that the task be skipped due to a
# variable not being present which would otherwise cause validation to fail
try:
if not self._task.evaluate_conditional(templar, tempvars):
display.debug("when evaluation is False, skipping this task")
return dict(changed=False, skipped=True, skip_reason='Conditional result was False', _ansible_no_log=no_log)
except AnsibleError as e:
# loop error takes precedence
if self._loop_eval_error is not None:
# Display the error from the conditional as well to prevent
# losing information useful for debugging.
display.v(to_text(e))
raise self._loop_eval_error # pylint: disable=raising-bad-type
raise
# Not skipping, if we had loop error raised earlier we need to raise it now to halt the execution of this task
if self._loop_eval_error is not None:
raise self._loop_eval_error # pylint: disable=raising-bad-type
        # if we ran into an error while setting up the PlayContext, raise it now, unless it is a known issue with delegation
        # and undefined vars (correct values land in cvars later on; if connection plugins still error, it blows up there)
if context_validation_error is not None:
raiseit = True
if self._task.delegate_to:
if isinstance(context_validation_error, AnsibleUndefinedVariable):
raiseit = False
elif isinstance(context_validation_error, AnsibleParserError):
                    # parser error, might be caused by undefined vars too
orig_exc = getattr(context_validation_error, 'orig_exc', None)
if isinstance(orig_exc, AnsibleUndefinedVariable):
raiseit = False
if raiseit:
raise context_validation_error # pylint: disable=raising-bad-type
# set templar to use temp variables until loop is evaluated
templar.available_variables = tempvars
# if this task is a TaskInclude, we just return now with a success code so the
# main thread can expand the task list for the given host
if self._task.action in C._ACTION_ALL_INCLUDE_TASKS:
include_args = self._task.args.copy()
include_file = include_args.pop('_raw_params', None)
if not include_file:
return dict(failed=True, msg="No include file was specified to the include")
include_file = templar.template(include_file)
return dict(include=include_file, include_args=include_args)
# if this task is a IncludeRole, we just return now with a success code so the main thread can expand the task list for the given host
elif self._task.action in C._ACTION_INCLUDE_ROLE:
include_args = self._task.args.copy()
return dict(include_args=include_args)
# Now we do final validation on the task, which sets all fields to their final values.
try:
self._task.post_validate(templar=templar)
except AnsibleError:
raise
except Exception:
return dict(changed=False, failed=True, _ansible_no_log=no_log, exception=to_text(traceback.format_exc()))
if '_variable_params' in self._task.args:
variable_params = self._task.args.pop('_variable_params')
if isinstance(variable_params, dict):
if C.INJECT_FACTS_AS_VARS:
display.warning("Using a variable for a task's 'args' is unsafe in some situations "
"(see https://docs.ansible.com/ansible/devel/reference_appendices/faq.html#argsplat-unsafe)")
variable_params.update(self._task.args)
self._task.args = variable_params
# update no_log to task value, now that we have it templated
no_log = self._task.no_log
# free tempvars up, not used anymore, cvars and vars_copy should be mainly used after this point
# updating the original 'variables' at the end
tempvars = {}
# setup cvars copy, used for all connection related templating
if self._task.delegate_to:
# use vars from delegated host (which already include task vars) instead of original host
cvars = variables.get('ansible_delegated_vars', {}).get(self._task.delegate_to, {})
else:
# just use normal host vars
cvars = variables
templar.available_variables = cvars
        # use magic var if it exists, if not, let task inheritance do its thing.
if cvars.get('ansible_connection') is not None:
current_connection = templar.template(cvars['ansible_connection'])
else:
current_connection = self._task.connection
# get the connection and the handler for this execution
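        # matches_name() below also matches plugin aliases and collection
        # redirects, which a plain _load_name equality check would miss
        # (e.g. 'paramiko' resolving to 'paramiko_ssh')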
if (not self._connection or
not getattr(self._connection, 'connected', False) or
not self._connection.matches_name([current_connection]) or
# pc compare, left here for old plugins, but should be irrelevant for those
# using get_option, since they are cleared each iteration.
self._play_context.remote_addr != self._connection._play_context.remote_addr):
self._connection = self._get_connection(cvars, templar, current_connection)
else:
# if connection is reused, its _play_context is no longer valid and needs
# to be replaced with the one templated above, in case other data changed
self._connection._play_context = self._play_context
plugin_vars = self._set_connection_options(cvars, templar)
# make a copy of the job vars here, as we update them here and later,
        # but don't want to pollute the original
vars_copy = variables.copy()
# update with connection info (i.e ansible_host/ansible_user)
self._connection.update_vars(vars_copy)
templar.available_variables = vars_copy
# TODO: eventually remove as pc is taken out of the resolution path
# feed back into pc to ensure plugins not using get_option can get correct value
self._connection._play_context = self._play_context.set_task_and_variable_override(task=self._task, variables=vars_copy, templar=templar)
# for persistent connections, initialize socket path and start connection manager
if any(((self._connection.supports_persistence and C.USE_PERSISTENT_CONNECTIONS), self._connection.force_persistence)):
self._play_context.timeout = self._connection.get_option('persistent_command_timeout')
display.vvvv('attempting to start connection', host=self._play_context.remote_addr)
display.vvvv('using connection plugin %s' % self._connection.transport, host=self._play_context.remote_addr)
options = self._connection.get_options()
socket_path = start_connection(self._play_context, options, self._task._uuid)
display.vvvv('local domain socket path is %s' % socket_path, host=self._play_context.remote_addr)
setattr(self._connection, '_socket_path', socket_path)
# TODO: eventually remove this block as this should be a 'consequence' of 'forced_local' modules
        # special handling for python interpreter for network_os, default to ansible python unless overridden
if 'ansible_network_os' in cvars and 'ansible_python_interpreter' not in cvars:
# this also avoids 'python discovery'
cvars['ansible_python_interpreter'] = sys.executable
# get handler
self._handler, module_context = self._get_action_handler_with_module_context(connection=self._connection, templar=templar)
if module_context is not None:
module_defaults_fqcn = module_context.resolved_fqcn
else:
module_defaults_fqcn = self._task.resolved_action
# Apply default params for action/module, if present
self._task.args = get_action_args_with_defaults(
module_defaults_fqcn, self._task.args, self._task.module_defaults, templar,
action_groups=self._task._parent._play._action_groups
)
# And filter out any fields which were set to default(omit), and got the omit token value
omit_token = variables.get('omit')
if omit_token is not None:
self._task.args = remove_omit(self._task.args, omit_token)
# Read some values from the task, so that we can modify them if need be
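        # note: with 'until' set, an explicit 'retries: N' (N > 0) gives N + 1
        # total attempts (the initial run plus N retries); if retries is unset
        # the loop runs 3 times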
if self._task.until:
retries = self._task.retries
if retries is None:
retries = 3
elif retries <= 0:
retries = 1
else:
retries += 1
else:
retries = 1
delay = self._task.delay
if delay < 0:
delay = 1
display.debug("starting attempt loop")
result = None
for attempt in range(1, retries + 1):
display.debug("running the handler")
try:
if self._task.timeout:
old_sig = signal.signal(signal.SIGALRM, task_timeout)
signal.alarm(self._task.timeout)
result = self._handler.run(task_vars=vars_copy)
except (AnsibleActionFail, AnsibleActionSkip) as e:
return e.result
except AnsibleConnectionFailure as e:
return dict(unreachable=True, msg=to_text(e))
except TaskTimeoutError as e:
msg = 'The %s action failed to execute in the expected time frame (%d) and was terminated' % (self._task.action, self._task.timeout)
return dict(failed=True, msg=msg)
finally:
if self._task.timeout:
signal.alarm(0)
old_sig = signal.signal(signal.SIGALRM, old_sig)
self._handler.cleanup()
display.debug("handler run complete")
# preserve no log
result["_ansible_no_log"] = no_log
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
if self._task.register:
if not isidentifier(self._task.register):
raise AnsibleError("Invalid variable name in 'register' specified: '%s'" % self._task.register)
vars_copy[self._task.register] = result
if self._task.async_val > 0:
if self._task.poll > 0 and not result.get('skipped') and not result.get('failed'):
result = self._poll_async_result(result=result, templar=templar, task_vars=vars_copy)
if result.get('failed'):
self._final_q.send_callback(
'v2_runner_on_async_failed',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
else:
self._final_q.send_callback(
'v2_runner_on_async_ok',
TaskResult(self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()))
# ensure no log is preserved
result["_ansible_no_log"] = no_log
# helper methods for use below in evaluating changed/failed_when
def _evaluate_changed_when_result(result):
if self._task.changed_when is not None and self._task.changed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.changed_when
result['changed'] = cond.evaluate_conditional(templar, vars_copy)
def _evaluate_failed_when_result(result):
if self._task.failed_when:
cond = Conditional(loader=self._loader)
cond.when = self._task.failed_when
failed_when_result = cond.evaluate_conditional(templar, vars_copy)
result['failed_when_result'] = result['failed'] = failed_when_result
else:
failed_when_result = False
return failed_when_result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
if self._task.delegate_to and self._task.delegate_facts:
if '_ansible_delegated_vars' in vars_copy:
vars_copy['_ansible_delegated_vars'].update(result['ansible_facts'])
else:
vars_copy['_ansible_delegated_vars'] = result['ansible_facts']
else:
vars_copy.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
vars_copy['ansible_facts'] = combine_vars(vars_copy.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
vars_copy.update(clean_facts(af))
# set the failed property if it was missing.
if 'failed' not in result:
# rc is here for backwards compatibility and modules that use it instead of 'failed'
if 'rc' in result and result['rc'] not in [0, "0"]:
result['failed'] = True
else:
result['failed'] = False
# Make attempts and retries available early to allow their use in changed/failed_when
if self._task.until:
result['attempts'] = attempt
# set the changed property if it was missing.
if 'changed' not in result:
result['changed'] = False
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# re-update the local copy of vars with the registered value, if specified,
# or any facts which may have been generated by the module execution
# This gives changed/failed_when access to additional recently modified
# attributes of result
if self._task.register:
vars_copy[self._task.register] = result
# if we didn't skip this task, use the helpers to evaluate the changed/
# failed_when properties
if 'skipped' not in result:
condname = 'changed'
try:
_evaluate_changed_when_result(result)
condname = 'failed'
_evaluate_failed_when_result(result)
except AnsibleError as e:
result['failed'] = True
result['%s_when_result' % condname] = to_text(e)
if retries > 1:
cond = Conditional(loader=self._loader)
cond.when = self._task.until
if cond.evaluate_conditional(templar, vars_copy):
break
else:
# no conditional check, or it failed, so sleep for the specified time
if attempt < retries:
result['_ansible_retry'] = True
result['retries'] = retries
display.debug('Retrying task, attempt %d of %d' % (attempt, retries))
self._final_q.send_callback(
'v2_runner_retry',
TaskResult(
self._host.name,
self._task._uuid,
result,
task_fields=self._task.dump_attrs()
)
)
time.sleep(delay)
self._handler = self._get_action_handler(connection=self._connection, templar=templar)
else:
if retries > 1:
# we ran out of attempts, so mark the result as failed
result['attempts'] = retries - 1
result['failed'] = True
if self._task.action not in C._ACTION_WITH_CLEAN_FACTS:
result = wrap_var(result)
# do the final update of the local variables here, for both registered
# values and any facts which may have been created
if self._task.register:
variables[self._task.register] = result
if 'ansible_facts' in result and self._task.action not in C._ACTION_DEBUG:
if self._task.action in C._ACTION_WITH_CLEAN_FACTS:
variables.update(result['ansible_facts'])
else:
# TODO: cleaning of facts should eventually become part of taskresults instead of vars
af = wrap_var(result['ansible_facts'])
variables['ansible_facts'] = combine_vars(variables.get('ansible_facts', {}), namespace_facts(af))
if C.INJECT_FACTS_AS_VARS:
variables.update(clean_facts(af))
# save the notification target in the result, if it was specified, as
# this task may be running in a loop in which case the notification
# may be item-specific, ie. "notify: service {{item}}"
if self._task.notify is not None:
result['_ansible_notify'] = self._task.notify
# add the delegated vars to the result, so we can reference them
# on the results side without having to do any further templating
        # also now add connection vars results when delegating
if self._task.delegate_to:
result["_ansible_delegated_vars"] = {'ansible_delegated_host': self._task.delegate_to}
for k in plugin_vars:
result["_ansible_delegated_vars"][k] = cvars.get(k)
# note: here for callbacks that rely on this info to display delegation
for requireshed in ('ansible_host', 'ansible_port', 'ansible_user', 'ansible_connection'):
if requireshed not in result["_ansible_delegated_vars"] and requireshed in cvars:
result["_ansible_delegated_vars"][requireshed] = cvars.get(requireshed)
# and return
display.debug("attempt loop complete, returning result")
return result
def _poll_async_result(self, result, templar, task_vars=None):
'''
Polls for the specified JID to be complete
'''
if task_vars is None:
task_vars = self._job_vars
async_jid = result.get('ansible_job_id')
if async_jid is None:
return dict(failed=True, msg="No job id was returned by the async task")
# Create a new pseudo-task to run the async_status module, and run
# that (with a sleep for "poll" seconds between each retry) until the
# async time limit is exceeded.
async_task = Task.load(dict(action='async_status', args={'jid': async_jid}, environment=self._task.environment))
# FIXME: this is no longer the case, normal takes care of all, see if this can just be generalized
# Because this is an async task, the action handler is async. However,
# we need the 'normal' action handler for the status check, so get it
# now via the action_loader
async_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=async_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
time_left = self._task.async_val
while time_left > 0:
time.sleep(self._task.poll)
try:
async_result = async_handler.run(task_vars=task_vars)
# We do not bail out of the loop in cases where the failure
# is associated with a parsing error. The async_runner can
# have issues which result in a half-written/unparseable result
# file on disk, which manifests to the user as a timeout happening
# before it's time to timeout.
if (int(async_result.get('finished', 0)) == 1 or
('failed' in async_result and async_result.get('_ansible_parsed', False)) or
'skipped' in async_result):
break
except Exception as e:
# Connections can raise exceptions during polling (eg, network bounce, reboot); these should be non-fatal.
# On an exception, call the connection's reset method if it has one
# (eg, drop/recreate WinRM connection; some reused connections are in a broken state)
display.vvvv("Exception during async poll, retrying... (%s)" % to_text(e))
display.debug("Async poll exception was:\n%s" % to_text(traceback.format_exc()))
try:
async_handler._connection.reset()
except AttributeError:
pass
# Little hack to raise the exception if we've exhausted the timeout period
time_left -= self._task.poll
if time_left <= 0:
raise
else:
time_left -= self._task.poll
self._final_q.send_callback(
'v2_runner_on_async_poll',
TaskResult(
self._host.name,
async_task._uuid,
async_result,
task_fields=async_task.dump_attrs(),
),
)
if int(async_result.get('finished', 0)) != 1:
if async_result.get('_ansible_parsed'):
return dict(failed=True, msg="async task did not complete within the requested time - %ss" % self._task.async_val, async_result=async_result)
else:
return dict(failed=True, msg="async task produced unparseable results", async_result=async_result)
else:
# If the async task finished, automatically cleanup the temporary
# status file left behind.
cleanup_task = Task.load(
{
'async_status': {
'jid': async_jid,
'mode': 'cleanup',
},
'environment': self._task.environment,
}
)
cleanup_handler = self._shared_loader_obj.action_loader.get(
'ansible.legacy.async_status',
task=cleanup_task,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
)
cleanup_handler.run(task_vars=task_vars)
cleanup_handler.cleanup(force=True)
async_handler.cleanup(force=True)
return async_result
def _get_become(self, name):
become = become_loader.get(name)
if not become:
raise AnsibleError("Invalid become method specified, could not find matching plugin: '%s'. "
"Use `ansible-doc -t become -l` to list available plugins." % name)
return become
def _get_connection(self, cvars, templar, current_connection):
'''
Reads the connection property for the host, and returns the
correct connection object from the list of connection plugins
'''
self._play_context.connection = current_connection
# TODO: play context has logic to update the connection for 'smart'
        # (default value, will choose between ssh and paramiko) and 'persistent'
# (really paramiko), eventually this should move to task object itself.
conn_type = self._play_context.connection
connection, plugin_load_context = self._shared_loader_obj.connection_loader.get_with_context(
conn_type,
self._play_context,
self._new_stdin,
task_uuid=self._task._uuid,
ansible_playbook_pid=to_text(os.getppid())
)
if not connection:
raise AnsibleError("the connection plugin '%s' was not found" % conn_type)
# load become plugin if needed
if cvars.get('ansible_become') is not None:
become = boolean(templar.template(cvars['ansible_become']))
else:
become = self._task.become
if become:
if cvars.get('ansible_become_method'):
become_plugin = self._get_become(templar.template(cvars['ansible_become_method']))
else:
become_plugin = self._get_become(self._task.become_method)
try:
connection.set_become_plugin(become_plugin)
except AttributeError:
# Older connection plugin that does not support set_become_plugin
pass
if getattr(connection.become, 'require_tty', False) and not getattr(connection, 'has_tty', False):
raise AnsibleError(
"The '%s' connection does not provide a TTY which is required for the selected "
"become plugin: %s." % (conn_type, become_plugin.name)
)
# Backwards compat for connection plugins that don't support become plugins
# Just do this unconditionally for now, we could move it inside of the
# AttributeError above later
self._play_context.set_become_plugin(become_plugin.name)
# Also backwards compat call for those still using play_context
self._play_context.set_attributes_from_plugin(connection)
return connection
def _set_plugin_options(self, plugin_type, variables, templar, task_keys):
try:
plugin = getattr(self._connection, '_%s' % plugin_type)
except AttributeError:
# Some plugins are assigned to private attrs, ``become`` is not
plugin = getattr(self._connection, plugin_type)
# network_cli's "real" connection plugin is not named connection
# to avoid the confusion of having connection.connection
if plugin_type == "ssh_type_conn":
plugin_type = "connection"
option_vars = C.config.get_plugin_vars(plugin_type, plugin._load_name)
options = {}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# TODO move to task method?
plugin.set_options(task_keys=task_keys, var_options=options)
return option_vars
def _set_connection_options(self, variables, templar):
# keep list of variable names possibly consumed
varnames = []
# grab list of usable vars for this plugin
option_vars = C.config.get_plugin_vars('connection', self._connection._load_name)
varnames.extend(option_vars)
# create dict of 'templated vars'
options = {'_extras': {}}
for k in option_vars:
if k in variables:
options[k] = templar.template(variables[k])
# add extras if plugin supports them
if getattr(self._connection, 'allow_extras', False):
for k in variables:
if k.startswith('ansible_%s_' % self._connection._load_name) and k not in options:
options['_extras'][k] = templar.template(variables[k])
task_keys = self._task.dump_attrs()
# The task_keys 'timeout' attr is the task's timeout, not the connection timeout.
# The connection timeout is threaded through the play_context for now.
task_keys['timeout'] = self._play_context.timeout
if self._play_context.password:
# The connection password is threaded through the play_context for
# now. This is something we ultimately want to avoid, but the first
# step is to get connection plugins pulling the password through the
# config system instead of directly accessing play_context.
task_keys['password'] = self._play_context.password
# Prevent task retries from overriding connection retries
del task_keys['retries']
# set options with 'templated vars' specific to this plugin and dependent ones
self._connection.set_options(task_keys=task_keys, var_options=options)
varnames.extend(self._set_plugin_options('shell', variables, templar, task_keys))
if self._connection.become is not None:
if self._play_context.become_pass:
# FIXME: eventually remove from task and play_context, here for backwards compat
# keep out of play objects to avoid accidental disclosure, only become plugin should have
# The become pass is already in the play_context if given on
# the CLI (-K). Make the plugin aware of it in this case.
task_keys['become_pass'] = self._play_context.become_pass
varnames.extend(self._set_plugin_options('become', variables, templar, task_keys))
# FOR BACKWARDS COMPAT:
for option in ('become_user', 'become_flags', 'become_exe', 'become_pass'):
try:
setattr(self._play_context, option, self._connection.become.get_option(option))
except KeyError:
pass # some plugins don't support all base flags
self._play_context.prompt = self._connection.become.prompt
# deals with networking sub_plugins (network_cli/httpapi/netconf)
sub = getattr(self._connection, '_sub_plugin', None)
if sub is not None and sub.get('type') != 'external':
plugin_type = get_plugin_class(sub.get("obj"))
varnames.extend(self._set_plugin_options(plugin_type, variables, templar, task_keys))
sub_conn = getattr(self._connection, 'ssh_type_conn', None)
if sub_conn is not None:
varnames.extend(self._set_plugin_options("ssh_type_conn", variables, templar, task_keys))
return varnames
def _get_action_handler(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action
'''
return self._get_action_handler_with_module_context(connection, templar)[0]
def _get_action_handler_with_module_context(self, connection, templar):
'''
        Returns the correct action plugin to handle the requested task action and the module context
'''
module_collection, separator, module_name = self._task.action.rpartition(".")
module_prefix = module_name.split('_')[0]
if module_collection:
# For network modules, which look for one action plugin per platform, look for the
# action plugin in the same collection as the module by prefixing the action plugin
# with the same collection.
network_action = "{0}.{1}".format(module_collection, module_prefix)
else:
network_action = module_prefix
collections = self._task.collections
# Check if the module has specified an action handler
module = self._shared_loader_obj.module_loader.find_plugin_with_context(
self._task.action, collection_list=collections
)
if not module.resolved or not module.action_plugin:
module = None
if module is not None:
handler_name = module.action_plugin
# let action plugin override module, fallback to 'normal' action plugin otherwise
elif self._shared_loader_obj.action_loader.has_plugin(self._task.action, collection_list=collections):
handler_name = self._task.action
elif all((module_prefix in C.NETWORK_GROUP_MODULES, self._shared_loader_obj.action_loader.has_plugin(network_action, collection_list=collections))):
handler_name = network_action
display.vvvv("Using network group action {handler} for {action}".format(handler=handler_name,
action=self._task.action),
host=self._play_context.remote_addr)
else:
# use ansible.legacy.normal to allow (historic) local action_plugins/ override without collections search
handler_name = 'ansible.legacy.normal'
collections = None # until then, we don't want the task's collection list to be consulted; use the builtin
handler = self._shared_loader_obj.action_loader.get(
handler_name,
task=self._task,
connection=connection,
play_context=self._play_context,
loader=self._loader,
templar=templar,
shared_loader_obj=self._shared_loader_obj,
collection_list=collections
)
if not handler:
raise AnsibleError("the handler '%s' was not found" % handler_name)
return handler, module
def start_connection(play_context, options, task_uuid):
'''
Starts the persistent connection
'''
candidate_paths = [C.ANSIBLE_CONNECTION_PATH or os.path.dirname(sys.argv[0])]
candidate_paths.extend(os.environ.get('PATH', '').split(os.pathsep))
for dirname in candidate_paths:
ansible_connection = os.path.join(dirname, 'ansible-connection')
if os.path.isfile(ansible_connection):
display.vvvv("Found ansible-connection at path {0}".format(ansible_connection))
break
else:
raise AnsibleError("Unable to find location of 'ansible-connection'. "
"Please set or check the value of ANSIBLE_CONNECTION_PATH")
env = os.environ.copy()
env.update({
# HACK; most of these paths may change during the controller's lifetime
# (eg, due to late dynamic role includes, multi-playbook execution), without a way
# to invalidate/update, ansible-connection won't always see the same plugins the controller
# can.
'ANSIBLE_BECOME_PLUGINS': become_loader.print_paths(),
'ANSIBLE_CLICONF_PLUGINS': cliconf_loader.print_paths(),
'ANSIBLE_COLLECTIONS_PATH': to_native(os.pathsep.join(AnsibleCollectionConfig.collection_paths)),
'ANSIBLE_CONNECTION_PLUGINS': connection_loader.print_paths(),
'ANSIBLE_HTTPAPI_PLUGINS': httpapi_loader.print_paths(),
'ANSIBLE_NETCONF_PLUGINS': netconf_loader.print_paths(),
'ANSIBLE_TERMINAL_PLUGINS': terminal_loader.print_paths(),
})
verbosity = []
if display.verbosity:
verbosity.append('-%s' % ('v' * display.verbosity))
python = sys.executable
master, slave = pty.openpty()
p = subprocess.Popen(
[python, ansible_connection, *verbosity, to_text(os.getppid()), to_text(task_uuid)],
stdin=slave, stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env
)
os.close(slave)
# We need to set the pty into noncanonical mode. This ensures that we
# can receive lines longer than 4095 characters (plus newline) without
# truncating.
old = termios.tcgetattr(master)
new = termios.tcgetattr(master)
new[3] = new[3] & ~termios.ICANON
try:
termios.tcsetattr(master, termios.TCSANOW, new)
write_to_file_descriptor(master, options)
write_to_file_descriptor(master, play_context.serialize())
(stdout, stderr) = p.communicate()
finally:
termios.tcsetattr(master, termios.TCSANOW, old)
os.close(master)
if p.returncode == 0:
result = json.loads(to_text(stdout, errors='surrogate_then_replace'))
else:
try:
result = json.loads(to_text(stderr, errors='surrogate_then_replace'))
except getattr(json.decoder, 'JSONDecodeError', ValueError):
# JSONDecodeError only available on Python 3.5+
result = {'error': to_text(stderr, errors='surrogate_then_replace')}
if 'messages' in result:
for level, message in result['messages']:
if level == 'log':
display.display(message, log_only=True)
elif level in ('debug', 'v', 'vv', 'vvv', 'vvvv', 'vvvvv', 'vvvvvv'):
getattr(display, level)(message, host=play_context.remote_addr)
else:
if hasattr(display, level):
getattr(display, level)(message)
else:
display.vvvv(message, host=play_context.remote_addr)
if 'error' in result:
if display.verbosity > 2:
if result.get('exception'):
msg = "The full traceback is:\n" + result['exception']
display.display(msg, color=C.COLOR_ERROR)
raise AnsibleError(result['error'])
return result['socket_path']
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block will just re-use the existing connection plugin, which won't have the `become` details from the first task on it.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True` so will always re-create the connection and subsequently the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong.
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so it works properly with redirected plugins and with aliased names like paramiko and paramiko_ssh. It also needs to be updated so that when a connection is re-used, the become plugin is added or removed as necessary.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the 2nd iteration will not run with become.
### Expected Results
The first task is run with become and the 2nd is run without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/integration/targets/loop-connection/aliases
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block will just re-use the existing connection plugin, which won't have the `become` details from the first task on it.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True` so will always re-create the connection and subsequently the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong.
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so it works properly with redirected plugins and with aliased names like paramiko and paramiko_ssh. It also needs to be updated so that when a connection is re-used, the become plugin is added or removed as necessary.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the 2nd iteration will not run with become.
### Expected Results
The first task is run with become and the 2nd is run without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/integration/targets/loop-connection/collections/ansible_collections/ns/name/meta/runtime.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block will just re-use the existing connection plugin, which won't have the `become` details from the first task on it.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True` so will always re-create the connection and subsequently the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong.
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so it works properly with redirected plugins and with aliased names like paramiko and paramiko_ssh. It also needs to be updated so that when a connection is re-used, the become plugin is added or removed as necessary.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the second iteration will not run with become.
### Expected Results
The first task runs with become and the second runs without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/integration/targets/loop-connection/collections/ansible_collections/ns/name/plugins/connection/dummy.py
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block just re-uses the existing connection plugin, which won't have the `become` details from the first task applied.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True`, so it always re-creates the connection and subsequently picks up the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so that it works correctly with redirected plugins and aliased names like paramiko and paramiko_ssh. It also needs to add or remove the become plugin as necessary when a connection is re-used.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the second iteration will not run with become.
### Expected Results
The first task runs with become and the second runs without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/integration/targets/loop-connection/main.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block just re-uses the existing connection plugin, which won't have the `become` details from the first task applied.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True`, so it always re-creates the connection and subsequently picks up the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so that it works correctly with redirected plugins and aliased names like paramiko and paramiko_ssh. It also needs to add or remove the become plugin as necessary when a connection is re-used.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the second iteration will not run with become.
### Expected Results
The first task runs with become and the second runs without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/integration/targets/loop-connection/runme.sh
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,425 |
task executor not checking become changes on active connections
|
### Summary
The code that checks the connection + become details in a loop is not applying the become state on connections that have the `connected` attribute set to `True`. If `connected` is `True`, the else block just re-uses the existing connection plugin, which won't have the `become` details from the first task applied.
https://github.com/ansible/ansible/blob/27ce607a144917e6b9a453813a7df6bbc9ea2213/lib/ansible/executor/task_executor.py#L562-L572
Note there are some other bugs that mask this problem:
* ssh never sets `connected = True`, so it always re-creates the connection and subsequently picks up the updated become info
* Using `ansible_connection=paramiko` also works because it never matches the connection `_load_name` of `paramiko_ssh`
* Redirected collections will most likely also mask the problem because of the load_name vs `ansible_connection` value check, which is most certainly wrong
The code needs to be updated to do a proper check of the connection name (`self._connection._load_name != current_connection`) so that it works correctly with redirected plugins and aliased names like paramiko and paramiko_ssh. It also needs to add or remove the become plugin as necessary when a connection is re-used.
The connection should also be closed when it is dropped.
### Issue Type
Bug Report
### Component Name
task_executor
connection
become
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0.dev0] (low-level-become e91a069260) last updated 2022/08/03 08:18:13 (GMT +1000)
config file = None
configured module search path = ['/home/jborean/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/jborean/dev/ansible/lib/ansible
ansible collection location = /home/jborean/dev:/home/jborean/ansible/collections:/usr/share/ansible/collections
executable location = /home/jborean/.pyenv/versions/ansible-310/bin/ansible
python version = 3.10.2 (main, Jan 18 2022, 12:56:09) [GCC 11.2.1 20211203 (Red Hat 11.2.1-7)] (/home/jborean/.pyenv/versions/3.10.2/envs/ansible-310/bin/python3.10)
jinja version = 3.1.1
libyaml = True
```
### Configuration
```console
N/A
```
### OS / Environment
N/A
### Steps to Reproduce
```yaml
- command: whoami
become: '{{ item }}'
with_items:
- true
- false
```
Doing `false` first reverses the problem: the second iteration will not run with become.
### Expected Results
The first task runs with become and the second runs without.
### Actual Results
```console
Both tasks are run with the first `become` value set.
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78425
|
https://github.com/ansible/ansible/pull/78565
|
be4807b712d83370561942aa7c3c7f2141759077
|
ba6da65a0f3baefda7a058ebbd0a8dcafb8512f5
| 2022-08-02T23:20:36Z |
python
| 2022-09-29T23:06:10Z |
test/units/executor/test_task_executor.py
|
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from unittest import mock
from units.compat import unittest
from unittest.mock import patch, MagicMock
from ansible.errors import AnsibleError
from ansible.executor.task_executor import TaskExecutor, remove_omit
from ansible.plugins.loader import action_loader, lookup_loader, module_loader
from ansible.parsing.yaml.objects import AnsibleUnicode
from ansible.utils.unsafe_proxy import AnsibleUnsafeText, AnsibleUnsafeBytes
from ansible.module_utils.six import text_type
from collections import namedtuple
from units.mock.loader import DictDataLoader
get_with_context_result = namedtuple('get_with_context_result', ['object', 'plugin_load_context'])
class TestTaskExecutor(unittest.TestCase):
def test_task_executor_init(self):
fake_loader = DictDataLoader({})
mock_host = MagicMock()
mock_task = MagicMock()
mock_play_context = MagicMock()
mock_shared_loader = MagicMock()
new_stdin = None
job_vars = dict()
mock_queue = MagicMock()
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=mock_shared_loader,
final_q=mock_queue,
)
def test_task_executor_run(self):
fake_loader = DictDataLoader({})
mock_host = MagicMock()
mock_task = MagicMock()
mock_task._role._role_path = '/path/to/role/foo'
mock_play_context = MagicMock()
mock_shared_loader = MagicMock()
mock_queue = MagicMock()
new_stdin = None
job_vars = dict()
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=mock_shared_loader,
final_q=mock_queue,
)
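        # Exercise run() across the loop scenarios: no loop items, an empty
        # loop, a populated loop, and a loop whose item lookup raises an error.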
te._get_loop_items = MagicMock(return_value=None)
te._execute = MagicMock(return_value=dict())
res = te.run()
te._get_loop_items = MagicMock(return_value=[])
res = te.run()
te._get_loop_items = MagicMock(return_value=['a', 'b', 'c'])
te._run_loop = MagicMock(return_value=[dict(item='a', changed=True), dict(item='b', failed=True), dict(item='c')])
res = te.run()
te._get_loop_items = MagicMock(side_effect=AnsibleError(""))
res = te.run()
self.assertIn("failed", res)
def test_task_executor_run_clean_res(self):
te = TaskExecutor(None, MagicMock(), None, None, None, None, None, None)
te._get_loop_items = MagicMock(return_value=[1])
te._run_loop = MagicMock(
return_value=[
{
'unsafe_bytes': AnsibleUnsafeBytes(b'{{ $bar }}'),
'unsafe_text': AnsibleUnsafeText(u'{{ $bar }}'),
'bytes': b'bytes',
'text': u'text',
'int': 1,
}
]
)
res = te.run()
data = res['results'][0]
self.assertIsInstance(data['unsafe_bytes'], AnsibleUnsafeText)
self.assertIsInstance(data['unsafe_text'], AnsibleUnsafeText)
self.assertIsInstance(data['bytes'], text_type)
self.assertIsInstance(data['text'], text_type)
self.assertIsInstance(data['int'], int)
def test_task_executor_get_loop_items(self):
fake_loader = DictDataLoader({})
mock_host = MagicMock()
mock_task = MagicMock()
mock_task.loop_with = 'items'
mock_task.loop = ['a', 'b', 'c']
mock_play_context = MagicMock()
mock_shared_loader = MagicMock()
mock_shared_loader.lookup_loader = lookup_loader
new_stdin = None
job_vars = dict()
mock_queue = MagicMock()
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=mock_shared_loader,
final_q=mock_queue,
)
items = te._get_loop_items()
self.assertEqual(items, ['a', 'b', 'c'])
def test_task_executor_run_loop(self):
items = ['a', 'b', 'c']
fake_loader = DictDataLoader({})
mock_host = MagicMock()
def _copy(exclude_parent=False, exclude_tasks=False):
new_item = MagicMock()
return new_item
mock_task = MagicMock()
mock_task.copy.side_effect = _copy
mock_play_context = MagicMock()
mock_shared_loader = MagicMock()
mock_queue = MagicMock()
new_stdin = None
job_vars = dict()
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=mock_shared_loader,
final_q=mock_queue,
)
def _execute(variables):
return dict(item=variables.get('item'))
te._execute = MagicMock(side_effect=_execute)
res = te._run_loop(items)
self.assertEqual(len(res), 3)
def test_task_executor_get_action_handler(self):
te = TaskExecutor(
host=MagicMock(),
task=MagicMock(),
job_vars={},
play_context=MagicMock(),
new_stdin=None,
loader=DictDataLoader({}),
shared_loader_obj=MagicMock(),
final_q=MagicMock(),
)
context = MagicMock(resolved=False)
te._shared_loader_obj.module_loader.find_plugin_with_context.return_value = context
action_loader = te._shared_loader_obj.action_loader
action_loader.has_plugin.return_value = True
action_loader.get.return_value = mock.sentinel.handler
mock_connection = MagicMock()
mock_templar = MagicMock()
action = 'namespace.prefix_suffix'
te._task.action = action
handler = te._get_action_handler(mock_connection, mock_templar)
self.assertIs(mock.sentinel.handler, handler)
action_loader.has_plugin.assert_called_once_with(
action, collection_list=te._task.collections)
action_loader.get.assert_called_once_with(
te._task.action, task=te._task, connection=mock_connection,
play_context=te._play_context, loader=te._loader,
templar=mock_templar, shared_loader_obj=te._shared_loader_obj,
collection_list=te._task.collections)
def test_task_executor_get_handler_prefix(self):
te = TaskExecutor(
host=MagicMock(),
task=MagicMock(),
job_vars={},
play_context=MagicMock(),
new_stdin=None,
loader=DictDataLoader({}),
shared_loader_obj=MagicMock(),
final_q=MagicMock(),
)
context = MagicMock(resolved=False)
te._shared_loader_obj.module_loader.find_plugin_with_context.return_value = context
action_loader = te._shared_loader_obj.action_loader
action_loader.has_plugin.side_effect = [False, True]
action_loader.get.return_value = mock.sentinel.handler
action_loader.__contains__.return_value = True
mock_connection = MagicMock()
mock_templar = MagicMock()
action = 'namespace.netconf_suffix'
module_prefix = action.split('_', 1)[0]
te._task.action = action
handler = te._get_action_handler(mock_connection, mock_templar)
self.assertIs(mock.sentinel.handler, handler)
action_loader.has_plugin.assert_has_calls([mock.call(action, collection_list=te._task.collections), # called twice
mock.call(module_prefix, collection_list=te._task.collections)])
action_loader.get.assert_called_once_with(
module_prefix, task=te._task, connection=mock_connection,
play_context=te._play_context, loader=te._loader,
templar=mock_templar, shared_loader_obj=te._shared_loader_obj,
collection_list=te._task.collections)
def test_task_executor_get_handler_normal(self):
te = TaskExecutor(
host=MagicMock(),
task=MagicMock(),
job_vars={},
play_context=MagicMock(),
new_stdin=None,
loader=DictDataLoader({}),
shared_loader_obj=MagicMock(),
final_q=MagicMock(),
)
action_loader = te._shared_loader_obj.action_loader
action_loader.has_plugin.return_value = False
action_loader.get.return_value = mock.sentinel.handler
action_loader.__contains__.return_value = False
module_loader = te._shared_loader_obj.module_loader
context = MagicMock(resolved=False)
module_loader.find_plugin_with_context.return_value = context
mock_connection = MagicMock()
mock_templar = MagicMock()
action = 'namespace.prefix_suffix'
module_prefix = action.split('_', 1)[0]
te._task.action = action
handler = te._get_action_handler(mock_connection, mock_templar)
self.assertIs(mock.sentinel.handler, handler)
action_loader.has_plugin.assert_has_calls([mock.call(action, collection_list=te._task.collections),
mock.call(module_prefix, collection_list=te._task.collections)])
action_loader.get.assert_called_once_with(
'ansible.legacy.normal', task=te._task, connection=mock_connection,
play_context=te._play_context, loader=te._loader,
templar=mock_templar, shared_loader_obj=te._shared_loader_obj,
collection_list=None)
def test_task_executor_execute(self):
fake_loader = DictDataLoader({})
mock_host = MagicMock()
mock_task = MagicMock()
mock_task.action = 'mock.action'
mock_task.args = dict()
mock_task.retries = 0
mock_task.delay = -1
mock_task.register = 'foo'
mock_task.until = None
mock_task.changed_when = None
mock_task.failed_when = None
mock_task.post_validate.return_value = None
        # mock_task.async_val cannot be left unset, because on Python 3
        # MagicMock() > 0 raises a TypeError. There are two reasons for using
        # the value 1 here: on Python 2 comparing MagicMock() > 0 returns True,
        # and the other reason is that if I specify 0 here, the test fails. ;)
mock_task.async_val = 1
mock_task.poll = 0
mock_play_context = MagicMock()
mock_play_context.post_validate.return_value = None
mock_play_context.update_vars.return_value = None
mock_connection = MagicMock()
mock_connection.force_persistence = False
mock_connection.supports_persistence = False
mock_connection.set_host_overrides.return_value = None
mock_connection._connect.return_value = None
mock_action = MagicMock()
mock_queue = MagicMock()
shared_loader = MagicMock()
new_stdin = None
job_vars = dict(omit="XXXXXXXXXXXXXXXXXXX")
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=shared_loader,
final_q=mock_queue,
)
te._get_connection = MagicMock(return_value=mock_connection)
context = MagicMock()
te._get_action_handler_with_context = MagicMock(return_value=get_with_context_result(mock_action, context))
mock_action.run.return_value = dict(ansible_facts=dict())
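        # Re-run _execute() under different task settings: plain run,
        # changed_when, failed_when, a false conditional, and an include action.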
res = te._execute()
mock_task.changed_when = MagicMock(return_value=AnsibleUnicode("1 == 1"))
res = te._execute()
mock_task.changed_when = None
mock_task.failed_when = MagicMock(return_value=AnsibleUnicode("1 == 1"))
res = te._execute()
mock_task.failed_when = None
mock_task.evaluate_conditional.return_value = False
res = te._execute()
mock_task.evaluate_conditional.return_value = True
mock_task.args = dict(_raw_params='foo.yml', a='foo', b='bar')
mock_task.action = 'include'
res = te._execute()
def test_task_executor_poll_async_result(self):
fake_loader = DictDataLoader({})
mock_host = MagicMock()
mock_task = MagicMock()
mock_task.async_val = 0.1
mock_task.poll = 0.05
mock_play_context = MagicMock()
mock_connection = MagicMock()
mock_action = MagicMock()
mock_queue = MagicMock()
shared_loader = MagicMock()
shared_loader.action_loader = action_loader
new_stdin = None
job_vars = dict(omit="XXXXXXXXXXXXXXXXXXX")
te = TaskExecutor(
host=mock_host,
task=mock_task,
job_vars=job_vars,
play_context=mock_play_context,
new_stdin=new_stdin,
loader=fake_loader,
shared_loader_obj=shared_loader,
final_q=mock_queue,
)
te._connection = MagicMock()
def _get(*args, **kwargs):
mock_action = MagicMock()
mock_action.run.return_value = dict(stdout='')
return mock_action
# testing with some bad values in the result passed to poll async,
# and with a bad value returned from the mock action
with patch.object(action_loader, 'get', _get):
mock_templar = MagicMock()
res = te._poll_async_result(result=dict(), templar=mock_templar)
self.assertIn('failed', res)
res = te._poll_async_result(result=dict(ansible_job_id=1), templar=mock_templar)
self.assertIn('failed', res)
def _get(*args, **kwargs):
mock_action = MagicMock()
mock_action.run.return_value = dict(finished=1)
return mock_action
# now testing with good values
with patch.object(action_loader, 'get', _get):
mock_templar = MagicMock()
res = te._poll_async_result(result=dict(ansible_job_id=1), templar=mock_templar)
self.assertEqual(res, dict(finished=1))
def test_recursive_remove_omit(self):
omit_token = 'POPCORN'
data = {
'foo': 'bar',
'baz': 1,
'qux': ['one', 'two', 'three'],
'subdict': {
'remove': 'POPCORN',
'keep': 'not_popcorn',
'subsubdict': {
'remove': 'POPCORN',
'keep': 'not_popcorn',
},
'a_list': ['POPCORN'],
},
'a_list': ['POPCORN'],
'list_of_lists': [
['some', 'thing'],
],
'list_of_dicts': [
{
'remove': 'POPCORN',
}
],
}
expected = {
'foo': 'bar',
'baz': 1,
'qux': ['one', 'two', 'three'],
'subdict': {
'keep': 'not_popcorn',
'subsubdict': {
'keep': 'not_popcorn',
},
'a_list': ['POPCORN'],
},
'a_list': ['POPCORN'],
'list_of_lists': [
['some', 'thing'],
],
'list_of_dicts': [{}],
}
self.assertEqual(remove_omit(data, omit_token), expected)
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,960 |
Docs: Add code-block wrappers to code examples: testing_documentation.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `testing_documentation.rst` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there is 1 instance where a lead-in sentence ends with `::`. Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep testing_documentation.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/testing_documentation.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78960
|
https://github.com/ansible/ansible/pull/78966
|
5b239acb77cc72758c87eb322d7766fd2b621fbb
|
538b99781f81b87ff2cd18ee6eae1db49a29c37a
| 2022-10-01T12:07:13Z |
python
| 2022-10-01T13:37:42Z |
docs/docsite/rst/dev_guide/testing_documentation.rst
|
:orphan:
.. _testing_module_documentation:
****************************
Testing module documentation
****************************
Before you submit a module for inclusion in the main Ansible repo, you must test your module documentation for correct HTML rendering and to ensure that the argspec matches the documentation in your Python file. The community pages offer more information on :ref:`testing reStructuredText documentation <testing_documentation_locally>`.
To check the HTML output of your module documentation:
#. Ensure you have a working :ref:`development environment <environment_setup>`.
#. Install required Python packages (drop '--user' in venv/virtualenv):
.. code-block:: bash
pip install --user -r requirements.txt
pip install --user -r docs/docsite/requirements.txt
#. Ensure your module is in the correct directory: ``lib/ansible/modules/$CATEGORY/mymodule.py``.
#. Build HTML from your module documentation: ``MODULES=mymodule make webdocs``.
#. To build the HTML documentation for multiple modules, use a comma-separated list of module names: ``MODULES=mymodule,mymodule2 make webdocs``.
#. View the HTML page at ``file:///path/to/docs/docsite/_build/html/modules/mymodule_module.html``.
To ensure that your module documentation matches your ``argument_spec``:
#. Install required Python packages (drop '--user' in venv/virtualenv):
.. code-block:: bash
pip install --user -r test/lib/ansible_test/_data/requirements/sanity.txt
#. Run the ``validate-modules`` test:
   .. code-block:: bash
      ansible-test sanity --test validate-modules mymodule
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,962 |
Docs: Add code-block wrappers to code examples: testing_pep8.rst
|
### This issue has a PR
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `testing_pep8.rst` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there is one instance where a lead-in sentence ends with `::`. Use the following grep command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep testing_pep8.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/testing_pep8.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78962
|
https://github.com/ansible/ansible/pull/78972
|
538b99781f81b87ff2cd18ee6eae1db49a29c37a
|
01484cdc68e2c8634bab5d8ffc2043e8d7471ee6
| 2022-10-01T12:34:29Z |
python
| 2022-10-01T14:37:40Z |
docs/docsite/rst/dev_guide/testing_pep8.rst
|
:orphan:
.. _testing_pep8:
*****
PEP 8
*****
.. contents:: Topics
`PEP 8`_ style guidelines are enforced by `pycodestyle`_ on all python files in the repository by default.
Running Locally
===============
The `PEP 8`_ check can be run locally with:

.. code-block:: shell

   ansible-test sanity --test pep8 [file-or-directory-path-to-check] ...
.. _PEP 8: https://www.python.org/dev/peps/pep-0008/
.. _pycodestyle: https://pypi.org/project/pycodestyle/
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,961 |
Docs: Add code-block wrappers to code examples: developing_module_utilities.rst
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `developing_module_utilities.rst` file in the Developer Guide (docs/docsite/rst/dev_guide), there is one instance where a lead-in sentence ends with `::`.
Use the following grep command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep developing_module_utilities.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/developing_module_utilities.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78961
|
https://github.com/ansible/ansible/pull/78968
|
01484cdc68e2c8634bab5d8ffc2043e8d7471ee6
|
1db75a41bff0eedc0fafaaef0180b1c7c6912b2a
| 2022-10-01T12:28:30Z |
python
| 2022-10-01T14:51:02Z |
docs/docsite/rst/dev_guide/developing_module_utilities.rst
|
.. _developing_module_utilities:
*************************************
Using and developing module utilities
*************************************
Ansible provides a number of module utilities, or snippets of shared code, that
provide helper functions you can use when developing your own modules. The
``basic.py`` module utility provides the main entry point for accessing the
Ansible library, and all Python Ansible modules must import something from
``ansible.module_utils``. A common option is to import ``AnsibleModule``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
The ``ansible.module_utils`` namespace is not a plain Python package: it is
constructed dynamically for each task invocation, by extracting imports and
resolving those matching the namespace against a :ref:`search path <ansible_search_path>` derived from the
active configuration.
To reduce the maintenance burden in a collection or in local modules, you can extract
duplicated code into one or more module utilities and import them into your modules. For example, if you have your own custom modules that import a ``my_shared_code`` library, you can place that into a ``./module_utils/my_shared_code.py`` file like this:

.. code-block:: python

   from ansible.module_utils.my_shared_code import MySharedCodeClient
When you run ``ansible-playbook``, Ansible will merge any files in your local ``module_utils`` directories into the ``ansible.module_utils`` namespace in the order defined by the :ref:`Ansible search path <ansible_search_path>`.
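For example, a minimal layout might look like this (the file names below are hypothetical, shown only for illustration):

.. code-block:: text

   playbook.yml
   module_utils/
       my_shared_code.py      # defines MySharedCodeClient
   library/
       my_module.py           # imports ansible.module_utils.my_shared_code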
Naming and finding module utilities
===================================
You can generally tell what a module utility does from its name and/or its location. Generic utilities (shared code used by many different kinds of modules) live in the main ansible/ansible codebase, in the ``common`` subdirectory or in the root directory of ``lib/ansible/module_utils``. Utilities used by a particular set of modules generally live in the same collection as those modules. For example:
* ``lib/ansible/module_utils/urls.py`` contains shared code for parsing URLs
* ``openstack.cloud.plugins.module_utils.openstack.py`` contains utilities for modules that work with OpenStack instances
* ``ansible.netcommon.plugins.module_utils.network.common.config.py`` contains utility functions for use by networking modules
Following this pattern with your own module utilities makes everything easy to find and use.
.. _standard_mod_utils:
Standard module utilities
=========================
Ansible ships with an extensive library of ``module_utils`` files. You can find the module utility source code in the ``lib/ansible/module_utils`` directory under your main Ansible path. We describe the most widely used utilities below. For more details on any specific module utility, please see the `source code for module_utils <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils>`_.
.. include:: shared_snippets/licensing.txt
- ``api.py`` - Supports generic API modules
- ``basic.py`` - General definitions and helper utilities for Ansible modules
- ``common/dict_transformations.py`` - Helper functions for dictionary transformations
- ``common/file.py`` - Helper functions for working with files
- ``common/text/`` - Helper functions for converting and formatting text
- ``common/parameters.py`` - Helper functions for dealing with module parameters
- ``common/sys_info.py`` - Functions for getting distribution and platform information
- ``common/validation.py`` - Helper functions for validating module parameters against a module argument spec
- ``facts/`` - Directory of utilities for modules that return facts. See `PR 23012 <https://github.com/ansible/ansible/pull/23012>`_ for more information
- ``json_utils.py`` - Utilities for filtering unrelated output around module JSON output, like leading and trailing lines
- ``powershell/`` - Directory of definitions and helper functions for Windows PowerShell modules
- ``pycompat24.py`` - Exception workaround for Python 2.4
- ``service.py`` - Utilities to enable modules to work with Linux services (placeholder, not in use)
- ``six/__init__.py`` - Bundled copy of the `Six Python library <https://pypi.org/project/six/>`_ to aid in writing code compatible with both Python 2 and Python 3
- ``splitter.py`` - String splitting and manipulation utilities for working with Jinja2 templates
- ``urls.py`` - Utilities for working with http and https requests
Several commonly-used utilities migrated to collections in Ansible 2.10, including:
- ``ismount.py`` migrated to ``ansible.posix.plugins.module_utils.mount.py`` - Single helper function that fixes os.path.ismount
- ``known_hosts.py`` migrated to ``community.general.plugins.module_utils.known_hosts.py`` - utilities for working with known_hosts file
For a list of migrated content with destination collections, see https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,963 |
Docs: Add code-block wrappers to code examples: testing.rst.
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `testing.rst` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there are 2 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep testing.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/testing.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78963
|
https://github.com/ansible/ansible/pull/78969
|
1db75a41bff0eedc0fafaaef0180b1c7c6912b2a
|
446406a0c844c365fb12fbf03579ee286847d964
| 2022-10-01T12:42:23Z |
python
| 2022-10-01T14:59:43Z |
docs/docsite/rst/dev_guide/testing.rst
|
.. _developing_testing:
***************
Testing Ansible
***************
.. contents::
:local:
Why test your Ansible contributions?
====================================
If you're a developer, one of the most valuable things you can do is to look at GitHub issues and help fix bugs, since bug-fixing is almost always prioritized over feature development. Even for non-developers, helping to test pull requests for bug fixes and features is still immensely valuable.
Ansible users who understand how to write playbooks and roles should be able to test their work. GitHub pull requests will automatically run a variety of tests (for example, Azure Pipelines) that show bugs in action. However, contributors must also test their work outside of the automated GitHub checks and show evidence of these tests in the PR to ensure that their work will be more likely to be reviewed and merged.
Read on to learn how Ansible is tested, how to test your contributions locally, and how to extend testing capabilities.
If you want to learn about testing collections, read :ref:`testing_collections`
Types of tests
==============
At a high level we have the following classifications of tests:
:compile:
* :ref:`testing_compile`
* Test python code against a variety of Python versions.
:sanity:
* :ref:`testing_sanity`
* Sanity tests are made up of scripts and tools used to perform static code analysis.
* The primary purpose of these tests is to enforce Ansible coding standards and requirements.
:integration:
* :ref:`testing_integration`
* Functional tests of modules and Ansible core functionality.
:units:
* :ref:`testing_units`
* Tests directly against individual parts of the code base.
Testing within GitHub & Azure Pipelines
=======================================
Organization
------------
When Pull Requests (PRs) are created they are tested using Azure Pipelines, a Continuous Integration (CI) tool. Results are shown at the end of every PR.
When Azure Pipelines detects an error and it can be linked back to a file that has been modified in the PR then the relevant lines will be added as a GitHub comment. For example:
.. code-block:: text
The test `ansible-test sanity --test pep8` failed with the following errors:
lib/ansible/modules/network/foo/bar.py:509:17: E265 block comment should start with '# '
The test `ansible-test sanity --test validate-modules` failed with the following error:
lib/ansible/modules/network/foo/bar.py:0:0: E307 version_added should be 2.4. Currently 2.3
From the above example we can see that ``--test pep8`` and ``--test validate-modules`` have identified an issue. The commands given allow you to run the same tests locally to ensure you've fixed all issues without having to push your changes to GitHub and wait for Azure Pipelines, for example:
If you haven't already got Ansible available, use the local checkout by running:
.. code-block:: shell-session
source hacking/env-setup
Then run the tests detailed in the GitHub comment:
.. code-block:: shell-session
ansible-test sanity --test pep8
ansible-test sanity --test validate-modules
If there isn't a GitHub comment stating what's failed you can inspect the results by clicking on the "Details" button under the "checks have failed" message at the end of the PR.
Rerunning a failing CI job
--------------------------
Occasionally you may find your PR fails due to a reason unrelated to your change. This could happen for several reasons, including:
* a temporary issue accessing an external resource, such as a yum or git repo
* a timeout creating a virtual machine to run the tests on
If either of these issues appear to be the case, you can rerun the Azure Pipelines test by:
* adding a comment with ``/rebuild`` (full rebuild) or ``/rebuild_failed`` (rebuild only failed CI nodes) to the PR
* closing and re-opening the PR (full rebuild)
* making another change to the PR and pushing to GitHub
If the issue persists, please contact us in the ``#ansible-devel`` chat channel (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_).
How to test a PR
================
Ideally, code should add tests that prove that the code works. That's not always possible and tests are not always comprehensive, especially when a user doesn't have access to a wide variety of platforms, or is using an API or web service. In these cases, live testing against real equipment can be more valuable than automation that runs against simulated interfaces. In any case, things should always be tested manually the first time as well.
Thankfully, helping to test Ansible is pretty straightforward, assuming you are familiar with how Ansible works.
Setup: Checking out a Pull Request
----------------------------------
You can do this by:
* checking out Ansible
* fetching the proposed changes into a test branch
* testing
* commenting on that particular issue on GitHub
Here's how:
.. warning::
Testing source code from GitHub pull requests sent to us does have some inherent risk, as the source code
sent may have mistakes or malicious code that could have a negative impact on your system. We recommend
doing all testing on a virtual machine, whether a cloud instance, or locally. Some users like Vagrant
or Docker for this, but they are optional. It is also useful to have virtual machines of different Linux or
other flavors, since some features (for example, package managers such as apt or yum) are specific to those OS versions.
Create a fresh area to work:
.. code-block:: shell-session
git clone https://github.com/ansible/ansible.git ansible-pr-testing
cd ansible-pr-testing
Next, find the pull request you'd like to test and make note of its number. It will look something like this:

.. code-block:: text

   Use os.path.sep instead of hardcoding / #65381
.. note:: Only test ``ansible:devel``
It is important that the PR request target be ``ansible:devel``, as we do not accept pull requests into any other branch. Dot releases are cherry-picked manually by Ansible staff.
Use the pull request number when you fetch the proposed changes and create your branch for testing:
.. code-block:: shell-session
git fetch origin refs/pull/XXXX/head:testing_PRXXXX
git checkout testing_PRXXXX
The first command fetches the proposed changes from the pull request and creates a new branch named ``testing_PRXXXX``, where the XXXX is the actual number associated with the pull request (for example, 65381). The second command checks out the newly created branch.
.. note::
If the GitHub user interface shows that the pull request will not merge cleanly, we do not recommend proceeding if you are not somewhat familiar with git and coding, as you will have to resolve a merge conflict. This is the responsibility of the original pull request contributor.
.. note::
Some users do not create feature branches, which can cause problems when they have multiple, unrelated commits in their version of ``devel``. If the source looks like ``someuser:devel``, make sure there is only one commit listed on the pull request.
The Ansible source includes a script, frequently used by Ansible developers, that allows you
to use Ansible directly from source without requiring a full installation.
Simply source it (to use the Linux/Unix terminology) to begin using it immediately:
.. code-block:: shell-session
source ./hacking/env-setup
This script modifies the ``PYTHONPATH`` environment variables (along with a few other things), which will be temporarily
set as long as your shell session is open.
Testing the Pull Request
------------------------
At this point, you should be ready to begin testing!
Some ideas of what to test are:
* Create a test playbook with the examples and check that they function correctly
* Test to see if any Python backtraces returned (that's a bug)
* Test on different operating systems, or against different library versions
Run sanity tests
^^^^^^^^^^^^^^^^
.. code:: shell
ansible-test sanity
More information: :ref:`testing_sanity`
Run unit tests
^^^^^^^^^^^^^^
.. code:: shell
ansible-test units
More information: :ref:`testing_units`
Run integration tests
^^^^^^^^^^^^^^^^^^^^^
.. code:: shell
ansible-test integration -v ping
More information: :ref:`testing_integration`
Any potential issues should be added as comments on the pull request (and it's acceptable to comment if the feature works as well), remembering to include the output of ``ansible --version``.

Example:

.. code-block:: text

   Works for me! Tested on `Ansible 2.3.0`. I verified this on CentOS 6.5 and also Ubuntu 14.04.
If the PR does not resolve the issue, or if you see any failures from the unit/integration tests, just include that output instead:
| This change causes errors for me.
|
| When I ran this Ubuntu 16.04 it failed with the following:
|
| \```
| some output
| StackTrace
| some other output
| \```
Code Coverage Online
^^^^^^^^^^^^^^^^^^^^
`The online code coverage reports <https://codecov.io/gh/ansible/ansible>`_ are a good way
to identify areas for testing improvement in Ansible. By following red colors you can
drill down through the reports to find files which have no tests at all. Adding both
integration and unit tests that clearly show how code should work, verify important
Ansible functions, and increase testing coverage in areas where there is none is a valuable
way to help improve Ansible.

The code coverage reports only cover the ``devel`` branch of Ansible, where new feature
development takes place. Pull requests and new code will be missing from the codecov.io
coverage reports, so local reporting is needed. Most ``ansible-test`` commands allow you
to collect code coverage; this is particularly useful for indicating where to extend
testing. See :ref:`testing_running_locally` for more information.
Want to know more about testing?
================================
If you'd like to know more about the plans for improving testing Ansible then why not join the
`Testing Working Group <https://github.com/ansible/community/blob/main/meetings/README.md>`_.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,967 |
Docs: Add code-block wrappers to code examples in style_guide
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `/style_guide` directory in the Developer Guide (`docs/docsite/rst/dev_guide`), there are 2 instances where files contain lead-in sentences ending with `::`.
```
docs/docsite/rst/dev-guide/style_guide/index.rst
docs/docsite/rst/dev-guide//style_guide/basic_rules.rst
```
Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" .
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/style_guide/index.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78967
|
https://github.com/ansible/ansible/pull/78970
|
446406a0c844c365fb12fbf03579ee286847d964
|
3dc0c2135ed451e7fe0839740ad539b73ee4bdc9
| 2022-10-01T13:22:19Z |
python
| 2022-10-01T15:11:07Z |
docs/docsite/rst/dev_guide/style_guide/basic_rules.rst
|
.. _styleguide_basic:
Basic rules
===========
.. contents::
:local:
Use standard American English
-----------------------------
Ansible uses Standard American English. Watch for common words that are spelled differently in American English (color vs colour, organize vs organise, and so on).
Write for a global audience
---------------------------
Everything you say should be understandable by people of different backgrounds and cultures. Avoid idioms and regionalism and maintain a neutral tone that cannot be misinterpreted. Avoid attempts at humor.
Follow naming conventions
-------------------------
Always follow naming conventions and trademarks.
.. good place to link to an Ansible terminology page
Use clear sentence structure
----------------------------
Clear sentence structure means:
- Start with the important information first.
- Avoid padding/adding extra words that make the sentence harder to understand.
- Keep it short - Longer sentences are harder to understand.
Some examples of improving sentences:
Bad:
The unwise walking about upon the area near the cliff edge may result in a dangerous fall and therefore it is recommended that one remains a safe distance to maintain personal safety.
Better:
Danger! Stay away from the cliff.
Bad:
Furthermore, large volumes of water are also required for the process of extraction.
Better:
Extraction also requires large volumes of water.
Avoid verbosity
---------------
Write short, succinct sentences. Avoid terms like:
- "...as has been said before,"
- "..each and every,"
- "...point in time,"
- "...in order to,"
Highlight menu items and commands
---------------------------------
When documenting menus or commands, it helps to **bold** what is important.
For menu procedures, bold the menu names, button names, and so on to help the user find them on the GUI:
1. On the **File** menu, click **Open**.
2. Type a name in the **User Name** field.
3. In the **Open** dialog box, click **Save**.
4. On the toolbar, click the **Open File** icon.
For code or command snippets, use the RST `code-block directive <https://www.sphinx-doc.org/en/1.5/markup/code.html#directive-code-block>`_:

.. code-block:: rst

   .. code-block:: bash

      ssh [email protected]
      show config
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,967 |
Docs: Add code-block wrappers to code examples in style_guide
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `/style_guide` directory in the Developer Guide (`docs/docsite/rst/dev_guide`), there are 2 instances where files contain lead-in sentences ending with `::`.
```
docs/docsite/rst/dev-guide/style_guide/index.rst
docs/docsite/rst/dev-guide//style_guide/basic_rules.rst
```
Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" .
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/style_guide/index.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78967
|
https://github.com/ansible/ansible/pull/78970
|
446406a0c844c365fb12fbf03579ee286847d964
|
3dc0c2135ed451e7fe0839740ad539b73ee4bdc9
| 2022-10-01T13:22:19Z |
python
| 2022-10-01T15:11:07Z |
docs/docsite/rst/dev_guide/style_guide/index.rst
|
.. _style_guide:
**********************************
Ansible documentation style guide
**********************************
Welcome to the Ansible style guide!
To create clear, concise, consistent, useful materials on docs.ansible.com, follow these guidelines:
.. contents::
:local:
Linguistic guidelines
=====================
We want the Ansible documentation to be:
* clear
* direct
* conversational
* easy to translate
We want reading the docs to feel like having an experienced, friendly colleague
explain how Ansible works.
Stylistic cheat-sheet
---------------------
This cheat-sheet illustrates a few rules that help achieve the "Ansible tone":
+-------------------------------+------------------------------+----------------------------------------+
| Rule | Good example | Bad example |
+===============================+==============================+========================================+
| Use active voice | You can run a task by | A task can be run by |
+-------------------------------+------------------------------+----------------------------------------+
| Use the present tense | This command creates a | This command will create a |
+-------------------------------+------------------------------+----------------------------------------+
| Address the reader | As you expand your inventory | When the number of managed nodes grows |
+-------------------------------+------------------------------+----------------------------------------+
| Use standard English | Return to this page | Hop back to this page |
+-------------------------------+------------------------------+----------------------------------------+
| Use American English | The color of the output | The colour of the output |
+-------------------------------+------------------------------+----------------------------------------+
Header case
-----------
Headers should be written in sentence case. For example, this section's title is
``Header case``, not ``Header Case`` or ``HEADER CASE``.
Avoid using Latin phrases
-------------------------
Latin words and phrases like ``e.g.`` or ``etc.``
are easily understood by English speakers.
They may be harder to understand for others and are also tricky for automated translation.
Use the following English terms in place of Latin terms or abbreviations:
+-------------------------------+------------------------------+
| Latin | English |
+===============================+==============================+
| i.e.                          | in other words               |
+-------------------------------+------------------------------+
| e.g. | for example |
+-------------------------------+------------------------------+
| etc.                          | and so on                    |
+-------------------------------+------------------------------+
| via                           | by/through                   |
+-------------------------------+------------------------------+
| vs./versus | rather than/against |
+-------------------------------+------------------------------+
reStructuredText guidelines
===========================
The Ansible documentation is written in reStructuredText and processed by Sphinx.
We follow these technical or mechanical guidelines on all rST pages:
.. _headers_style:
Header notation
---------------
`Section headers in reStructuredText <https://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html#sections>`_
can use a variety of notations.
Sphinx will 'learn on the fly' when creating a hierarchy of headers.
To make our documents easy to read and to edit, we follow a standard set of header notations.
We use:
* ``###`` with overline, for parts:
.. code-block:: rst
###############
Developer guide
###############
* ``***`` with overline, for chapters:
.. code-block:: rst
*******************
Ansible style guide
*******************
* ``===`` for sections:
.. code-block:: rst
Mechanical guidelines
=====================
* ``---`` for subsections:
.. code-block:: rst
Internal navigation
-------------------
* ``^^^`` for sub-subsections:
.. code-block:: rst
Adding anchors
^^^^^^^^^^^^^^
* ``"""`` for paragraphs:
.. code-block:: rst
Paragraph that needs a title
""""""""""""""""""""""""""""
Syntax highlighting - Pygments
------------------------------
The Ansible documentation supports a range of `Pygments lexers <https://pygments.org/>`_
for `syntax highlighting <https://www.sphinx-doc.org/en/master/usage/restructuredtext/directives.html#code-examples>`_ to make our code examples look good. Each code-block must be correctly indented and surrounded by blank lines.
The Ansible documentation allows the following values:
* none (no highlighting)
* ansible-output (a custom lexer for Ansible output)
* bash
* console
* csharp
* ini
* json
* powershell
* python
* rst
* sh
* shell
* shell-session
* text
* yaml
* yaml+jinja
For example, you can highlight Python code using following syntax:
.. code-block:: rst
.. code-block:: python
def my_beautiful_python_code():
pass
.. _style_links:
Internal navigation
-------------------
`Anchors (also called labels) and links <https://www.sphinx-doc.org/en/master/usage/restructuredtext/roles.html#ref-role>`_
work together to help users find related content.
Local tables of contents also help users navigate quickly to the information they need.
All internal links should use the ``:ref:`` syntax.
Every page should have at least one anchor to support internal ``:ref:`` links.
Long pages, or pages with multiple levels of headers, can also include a local TOC.
.. note::
Avoid raw URLs. RST and sphinx allow ``https://my.example.com``, but this is unhelpful for those using screen readers. ``:ref:`` links automatically pick up the header from the anchor, but for external links, always use the `` `link title <link-url>`_ `` format.
.. _adding_anchors_rst:
Adding anchors
^^^^^^^^^^^^^^
* Include at least one anchor on every page
* Place the main anchor above the main header
* If the file has a unique title, use that for the main page anchor:

  .. code-block:: rst

     .. _unique_page:
* You may also add anchors elsewhere on the page
Adding internal links
^^^^^^^^^^^^^^^^^^^^^
* All internal links must use ``:ref:`` syntax. These links both point to the anchor defined above:
.. code-block:: rst
:ref:`unique_page`
:ref:`this page <unique_page>`
The second example adds custom text for the link.
Adding links to modules and plugins
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Ansible 2.10 and later require the extended Fully Qualified Collection Name (FQCN) as part of the links:
.. code-block:: text
ansible_collections. + FQCN + _module
For example:
.. code-block:: rst
:ref:`ansible.builtin.first_found lookup plugin <ansible_collections.ansible.builtin.first_found_lookup>`
displays as :ref:`ansible.builtin.first_found lookup plugin <ansible_collections.ansible.builtin.first_found_lookup>`.
Modules require different suffixes from other plugins:
* Module links use this extended FQCN module name with ``_module`` for the anchor.
* Plugin links use this extended FQCN plugin name with the plugin type (``_connection`` for example).
.. code-block:: rst
:ref:`arista.eos.eos_config <ansible_collections.arista.eos.eos_config_module>`
:ref:`kubernetes.core.kubectl connection plugin <ansible_collections.kubernetes.core.kubectl_connection>`
.. note::
``ansible.builtin`` is the FQCN for modules included in ``ansible.base``. Documentation links are the only place you prepend ``ansible_collections`` to the FQCN. This is used by the documentation build scripts to correctly fetch documentation from collections on Ansible Galaxy.
.. _local_toc:
Adding local TOCs
^^^^^^^^^^^^^^^^^
The page you're reading includes a `local TOC <https://docutils.sourceforge.io/docs/ref/rst/directives.html#table-of-contents>`_.
If you include a local TOC:
* place it below, not above, the main heading and (optionally) introductory text
* use the ``:local:`` directive so the page's main header is not included
* do not include a title
The syntax is:
.. code-block:: rst
.. contents::
:local:
Accessibility guidelines
=========================
Ansible documentation has a goal to be more accessible. Use the following guidelines to help us reach this goal.
Images and alternative text
Ensure all icons, images, diagrams, and non text elements have a meaningful alternative-text description. Do not include screen captures of CLI output. Use ``code-block`` instead.
.. code-block:: text
.. image:: path/networkdiag.png
:width: 400
:alt: SpiffyCorp network diagram
Links and hypertext
URLs and cross-reference links have descriptive text that conveys information about the content of the linked target. See :ref:`style_links` for how to format links.
Tables
Tables have a simple, logical reading order from left to right, and top to bottom.
Tables include a header row and avoid empty or blank table cells.
Label tables with a descriptive title.
.. code-block:: rst
.. table:: File descriptions
+----------+----------------------------+
|File |Purpose |
+==========+============================+
|foo.txt |foo configuration settings |
+----------+----------------------------+
|bar.txt |bar configuration settings |
+----------+----------------------------+
Colors and other visual information
* Avoid instructions that rely solely on sensory characteristics. For example, do not use ``Click the square, blue button to continue.``
* Convey information by methods and not by color alone.
* Ensure there is sufficient contrast between foreground and background text or graphical elements in images and diagrams.
* Instructions for navigating through an interface make sense without directional indicators such as left, right, above, and below.
Accessibility resources
------------------------
Use the following resources to help test your documentation changes:
* `axe DevTools browser extension <https://chrome.google.com/webstore/detail/axe-devtools-web-accessib/lhdoppojpmngadmnindnejefpokejbdd?hl=en-US&_ga=2.98933278.1490638154.1645821120-953800914.1645821120>`_ - Highlights accessibility issues on a website page.
* `WAVE browser extension <https://wave.webaim.org/extension/>`_ from WebAIM - another accessibility tester.
* `Orca screen reader <https://help.gnome.org/users/orca/stable/>`_ - Common tool used by people with vision impairments.
* `color filter <https://www.toptal.com/designers/colorfilter/>`_ - For color-blind testing.
More resources
==============
These pages offer more help with grammatical, stylistic, and technical rules for documentation.
.. toctree::
:maxdepth: 1
basic_rules
voice_style
trademarks
grammar_punctuation
spelling_word_choice
search_hints
resources
.. seealso::
:ref:`community_documentation_contributions`
How to contribute to the Ansible documentation
:ref:`testing_documentation_locally`
How to build the Ansible documentation
`irc.libera.chat <https://libera.chat>`_
#ansible-docs IRC chat channel
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,971 |
Docs: Add code-block wrappers to code examples in module_lifecycle.rst
|
### Issue has been assigned to IMvision12
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `module_lifecycle.rst ` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there is one instance where a lead-in sentence ends with `::`. Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep module_lifecycle.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/module_lifecycle.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78971
|
https://github.com/ansible/ansible/pull/78974
|
3dc0c2135ed451e7fe0839740ad539b73ee4bdc9
|
93c2cb2b8e1df8a5e31c2119f4c13bc8f97ed775
| 2022-10-01T14:14:22Z |
python
| 2022-10-01T15:37:57Z |
docs/docsite/rst/dev_guide/module_lifecycle.rst
|
.. _module_lifecycle:
********************************************
The lifecycle of an Ansible module or plugin
********************************************
Modules and plugins in the main Ansible repository have a defined life cycle, from the first introduction to final removal. The module and plugin lifecycle is tied to the :ref:`Ansible release cycle <release_cycle>`.
A module or plugin may move through these four stages:
1. When a module or plugin is first accepted into Ansible, we consider it in tech preview and will mark it as such in the documentation.
2. If a module or plugin matures, the 'preview' mark in the documentation is removed. Backward compatibility for these modules and plugins is maintained but not guaranteed, which means their parameters should be maintained with stable meanings.
3. If a module's or plugin's target API changes radically, or if someone creates a better implementation of its functionality, we may mark it deprecated. Modules and plugins that are deprecated are still available but they are reaching the end of their life cycle. We retain deprecated modules and plugins for 4 release cycles with deprecation warnings to help users update playbooks and roles that use them.
4. When a module or plugin has been deprecated for four release cycles, it is removed and replaced with a tombstone entry in the routing configuration. Modules and plugins that are removed are no longer shipped with Ansible. The tombstone entry helps users find alternative modules and plugins.
For modules and plugins in collections, the lifecycle is similar. Since ansible-base 2.10, it is no longer possible to mark modules as 'preview' or 'stable'.
.. _deprecating_modules:
Deprecating modules and plugins in the Ansible main repository
==============================================================
To deprecate a module in ansible-core, you must:
1. Rename the file so it starts with an ``_``, for example, rename ``old_cloud.py`` to ``_old_cloud.py``. This keeps the module available and marks it as deprecated on the module index pages.
2. Mention the deprecation in the relevant changelog (by creating a changelog fragment with a section ``deprecated_features``).
3. Reference the deprecation in the relevant ``porting_guide_core_x.y.rst``.
4. Add ``deprecated:`` to the documentation with the following sub-values:
:removed_in: A ``string``, such as ``"2.10"``; the version of Ansible where the module will be replaced with a docs-only module stub. Usually current release +4. Mutually exclusive with :remove_by_date:.
:remove_by_date: (Added in ansible-base 2.10). An ISO 8601 formatted date when the module will be removed. Usually 2 years from the date the module is deprecated. Mutually exclusive with :removed_in:.
:why: Optional string that details why this has been deprecated.
:alternative: Inform users what they should do instead, for example, ``Use M(whatmoduletouseinstead) instead.``.
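As a quick sketch, such a ``deprecated:`` block in the module documentation could look like the following; the version, reason, and alternative here are invented for illustration:

.. code-block:: yaml

   deprecated:
     removed_in: "2.14"
     why: This module relied on an API that has been retired.
     alternative: Use M(foo.bar.new_cloud) instead.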
* For an example of documenting deprecation, see this `PR that deprecates multiple modules <https://github.com/ansible/ansible/pull/43781/files>`_.
Some of the elements in the PR might now be out of date.
Deprecating modules and plugins in a collection
===============================================
To deprecate a module in a collection, you must:
1. Add a ``deprecation`` entry to ``plugin_routing`` in ``meta/runtime.yml``. For example, to deprecate the module ``old_cloud``, add:
.. code-block:: yaml
plugin_routing:
modules:
old_cloud:
deprecation:
removal_version: 2.0.0
warning_text: Use foo.bar.new_cloud instead.
For other plugin types, you have to replace ``modules:`` with ``<plugin_type>:``, for example ``lookup:`` for lookup plugins.
Instead of ``removal_version``, you can also use ``removal_date`` with an ISO 8601 formatted date after which the module will be removed in a new major version of the collection.
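For example, a date-based sketch of the same entry (the date here is illustrative) could be:

.. code-block:: yaml

   plugin_routing:
     modules:
       old_cloud:
         deprecation:
           removal_date: 2023-12-01
           warning_text: Use foo.bar.new_cloud instead.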
2. Mention the deprecation in the relevant changelog. If the collection uses ``antsibull-changelog``, create a changelog fragment with a section ``deprecated_features``.
3. Add ``deprecated:`` to the documentation of the module or plugin with the following sub-values:
:removed_in: A ``string``, such as ``"2.10"``; the version of Ansible where the module will be replaced with a docs-only module stub. Usually current release +4. Mutually exclusive with :remove_by_date:.
:remove_by_date: (Added in ansible-base 2.10). An ISO 8601 formatted date when the module will be removed. Usually 2 years from the date the module is deprecated. Mutually exclusive with :removed_in:.
:why: Optional string that details why this has been deprecated.
:alternative: Inform users what they should do instead, for example, ``Use M(whatmoduletouseinstead) instead.``.
Changing a module or plugin name in the Ansible main repository
===============================================================
You can also rename a module and keep a deprecated alias to the old name by using a symlink that starts with ``_``.
This example allows the ``stat`` module to be called with ``fileinfo``, making the following examples equivalent:

.. code-block:: bash

   EXAMPLES = '''
   ln -s stat.py _fileinfo.py
   ansible -m stat -a "path=/tmp" localhost
   ansible -m fileinfo -a "path=/tmp" localhost
   '''
Renaming a module or plugin in a collection, or redirecting a module or plugin to another collection
====================================================================================================
To rename a module or plugin in a collection, or to redirect a module or plugin to another collection, you need to add a ``redirect`` entry to ``plugin_routing`` in ``meta/runtime.yml``. For example, to redirect the module ``old_cloud`` to ``foo.bar.new_cloud``, add:
.. code-block:: yaml
plugin_routing:
modules:
old_cloud:
redirect: foo.bar.new_cloud
If you want to deprecate the old name, add a ``deprecation:`` entry (see above):
.. code-block:: yaml
plugin_routing:
modules:
old_cloud:
redirect: foo.bar.new_cloud
deprecation:
removal_version: 2.0.0
warning_text: Use foo.bar.new_cloud instead.
You need to use the Fully Qualified Collection Name (FQCN) of the new module/plugin name, even if it is located in the same collection as the redirect. By using a FQCN from another collection, you redirect the module/plugin to that collection.
If you need to support Ansible 2.9, please note that Ansible 2.9 does not know about ``meta/runtime.yml``. With Ansible 2.9 you can still rename plugins and modules inside one collection by using symbolic links. Note that ansible-base 2.10, ansible-core 2.11, and newer will prefer ``meta/runtime.yml`` entries over symbolic links.
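As a sketch of the symbolic link approach, keeping a hypothetical ``old_cloud`` module available after renaming it to ``new_cloud`` could look like:

.. code-block:: bash

   # inside the collection, expose the new module under its old name as well
   cd plugins/modules
   ln -s new_cloud.py old_cloud.py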
Tombstoning a module or plugin in a collection
==============================================
To remove a deprecated module or plugin from a collection, you need to tombstone it:
1. Remove the module or plugin file with related files like tests, documentation references, and documentation.
2. Add a tombstone entry in ``meta/runtime.yml``. For example, to tombstone the module ``old_cloud``, add:
.. code-block:: yaml
plugin_routing:
modules:
old_cloud:
tombstone:
removal_version: 2.0.0
warning_text: Use foo.bar.new_cloud instead.
Instead of ``removal_version``, you can also use ``removal_date`` with an ISO 8601 formatted date. The date should be the date of the next major release.
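For example, a date-based sketch of the same tombstone entry (the date is illustrative) could be:

.. code-block:: yaml

   plugin_routing:
     modules:
       old_cloud:
         tombstone:
           removal_date: 2023-12-01
           warning_text: Use foo.bar.new_cloud instead.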
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,965 |
Docs: Add code-block wrappers to code examples in ignores.rst.
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `ignores.rst ` file in the Developer Guide (`docs/docsite/rst/dev_guide/testing/sanity/`), there are 2 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep ignores.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/testing/sanity/ignores.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78965
|
https://github.com/ansible/ansible/pull/78979
|
93c2cb2b8e1df8a5e31c2119f4c13bc8f97ed775
|
9afb37eda605d79348a19d49cd0a615a6691aa3a
| 2022-10-01T13:02:34Z |
python
| 2022-10-01T18:40:28Z |
docs/docsite/rst/dev_guide/testing/sanity/ignores.rst
|
ignores
=======
Sanity tests for individual files can be skipped, and specific errors can be ignored.
When to Ignore Errors
---------------------
Sanity tests are designed to improve code quality and identify common issues with content.
When issues are identified during development, those issues should be corrected.
As development of Ansible continues, sanity tests are expanded to detect issues that previous releases could not.
To allow time for existing content to be updated to pass newer tests, ignore entries can be added.
New content should not use ignores for existing sanity tests.
When code is fixed to resolve sanity test errors, any relevant ignores must also be removed.
If the ignores are not removed, this will be reported as an unnecessary ignore error.
This is intended to prevent future regressions due to the same error recurring after being fixed.
When to Skip Tests
------------------
Although rare, there are reasons for skipping a sanity test instead of ignoring the errors it reports.
If a sanity test results in a traceback when processing content, that error cannot be ignored.
If this occurs, open a new `bug report <https://github.com/ansible/ansible/issues/new?template=bug_report.md>`_ for the issue so it can be fixed.
If the traceback occurs due to an issue with the content, that issue should be fixed.
If the content is correct, the test will need to be skipped until the bug in the sanity test is fixed.
Caution should be used when skipping sanity tests instead of ignoring them.
Since the test is skipped entirely, resolution of the issue will not be automatically detected.
This will prevent regression detection from working once the issue has been resolved.
For this reason it is a good idea to periodically review skipped entries manually to verify they are required.
Ignore File Location
--------------------
The location of the ignore file depends on the type of content being tested.
Ansible Collections
^^^^^^^^^^^^^^^^^^^
Since sanity tests change between Ansible releases, a separate ignore file is needed for each Ansible major release.
The filename is ``tests/sanity/ignore-X.Y.txt`` where ``X.Y`` is the Ansible release being used to test the collection.
Maintaining a separate file for each Ansible release allows a collection to pass tests for multiple versions of Ansible.
Ansible
^^^^^^^
When testing Ansible, all ignores are placed in the ``test/sanity/ignore.txt`` file.
Only a single file is needed because ``ansible-test`` is developed and released as a part of Ansible itself.
Ignore File Format
------------------
The ignore file contains one entry per line.
Each line consists of two columns, separated by a single space.
Comments may be added at the end of an entry, starting with a hash (``#``) character, which can be preceded by zero or more spaces.
Blank and comment-only lines are not allowed.
The first column specifies the file path that the entry applies to.
File paths must be relative to the root of the content being tested.
This is either the Ansible source or an Ansible collection.
File paths cannot contain a space or the hash (``#``) character.
The second column specifies the sanity test that the entry applies to.
This will be the name of the sanity test.
If the sanity test is specific to a version of Python, the name will include a dash (``-``) and the relevant Python version.
If the named test uses error codes then the error code to ignore must be appended to the name of the test, separated by a colon (``:``).
Below are some example ignore entries for an Ansible collection:

.. code-block:: text

   roles/my_role/files/my_script.sh shellcheck:SC2154 # ignore undefined variable
   plugins/modules/my_module.py validate-modules:missing-gplv3-license # ignore license check
   plugins/modules/my_module.py import-3.8 # needs update to support collections.abc on Python 3.8+
It is also possible to skip a sanity test for a specific file.
This is done by adding ``!skip`` after the sanity test name in the second column.
When this is done, no error code is included, even if the sanity test uses error codes.
Below are some example skip entries for an Ansible collection:

.. code-block:: text

   plugins/module_utils/my_util.py validate-modules!skip # waiting for bug fix in module validator
   plugins/lookup/my_plugin.py compile-2.6!skip # Python 2.6 is not supported on the controller
See the full list of :ref:`sanity tests <all_sanity_tests>`, which describes each test and explains how to fix identified issues.
Ignore File Errors
------------------
There are various errors that can be reported for the ignore file itself:
- syntax errors parsing the ignore file
- references to a file path that does not exist
- references to a sanity test that does not exist
- ignoring an error that does not occur
- ignoring a file which is skipped
- duplicate entries
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,976 |
Docs: Add code-block wrappers to code examples in developing_collections_distributing.rst
|
### This issue has been assigned to doczkal
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `developing_collections_distributing.rst ` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there is one instance where a lead-in sentence ends with `::`. Use the following `grep` command to identify the line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep developing_collections_distributing.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/developing_collections_distributing.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78976
|
https://github.com/ansible/ansible/pull/78977
|
9afb37eda605d79348a19d49cd0a615a6691aa3a
|
1b922b42dd5e18aeff789f2ee6fcb0a43485ac12
| 2022-10-01T15:34:34Z |
python
| 2022-10-03T08:45:44Z |
docs/docsite/rst/dev_guide/developing_collections_distributing.rst
|
.. _distributing_collections:
************************
Distributing collections
************************
A collection is a distribution format for Ansible content. A typical collection contains modules and other plugins that address a set of related use cases. For example, a collection might automate administering a particular database. A collection can also contain roles and playbooks.
To distribute your collection and allow others to use it, you can publish your collection on one or more :term:`distribution server`. Distribution servers include:
================================= ===================================================================
Distribution server Collections accepted
================================= ===================================================================
Ansible Galaxy All collections
:term:`Pulp 3 Galaxy` All collections, supports signed collections
Red Hat Automation Hub Only collections certified by Red Hat, supports signed collections
Privately hosted Automation Hub Collections authorized by the owners
================================= ===================================================================
Distributing collections involves four major steps:
#. Initial configuration of your distribution server or servers
#. Building your collection tarball
#. Preparing to publish your collection
#. Publishing your collection
.. contents::
:local:
:depth: 2
.. _config_distribution_server:
Initial configuration of your distribution server or servers
============================================================
Configure a connection to one or more distribution servers so you can publish collections there. You only need to configure each distribution server once. You must repeat the other steps (building your collection tarball, preparing to publish, and publishing your collection) every time you publish a new collection or a new version of an existing collection.
1. Create a namespace on each distribution server you want to use.
2. Get an API token for each distribution server you want to use.
3. Specify the API token for each distribution server you want to use.
.. _get_namespace:
Creating a namespace
--------------------
You must upload your collection into a namespace on each distribution server. If you have a login for Ansible Galaxy, your Ansible Galaxy username is usually also an Ansible Galaxy namespace.
.. warning::
Namespaces on Ansible Galaxy cannot include hyphens. If you have a login for Ansible Galaxy that includes a hyphen, your Galaxy username is not also a Galaxy namespace. For example, ``awesome-user`` is a valid username for Ansible Galaxy, but it is not a valid namespace.
You can create additional namespaces on Ansible Galaxy if you choose. For Red Hat Automation Hub and private Automation Hub you must create a namespace before you can upload your collection. To create a namespace:
* To create a namespace on Galaxy, see `Galaxy namespaces <https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespaces>`_ on the Galaxy docsite for details.
* To create a namespace on Red Hat Automation Hub, see the `Ansible Certified Content FAQ <https://access.redhat.com/articles/4916901>`_.
Specify the namespace in the :file:`galaxy.yml` file for each collection. For more information on the :file:`galaxy.yml` file, see :ref:`collections_galaxy_meta`.
.. _galaxy_get_token:
Getting your API token
----------------------
An API token authenticates your connection to each distribution server. You need a separate API token for each distribution server. Use the correct API token to connect to each distribution server securely and protect your content.
To get your API token:
* To get an API token for Galaxy, go to the `Galaxy profile preferences <https://galaxy.ansible.com/me/preferences>`_ page and click :guilabel:`API Key`.
* To get an API token for Automation Hub, go to `the token page <https://cloud.redhat.com/ansible/automation-hub/token/>`_ and click :guilabel:`Load token`.
.. _galaxy_specify_token:
Specifying your API token and distribution server
-------------------------------------------------
Each time you publish a collection, you must specify the API token and the distribution server to create a secure connection. You have two options for specifying the token and distribution server:
* You can configure the token in configuration, as part of a ``galaxy_server_list`` entry in your :file:`ansible.cfg` file. Using configuration is the most secure option.
* You can pass the token at the command line as an argument to the ``ansible-galaxy`` command. If you pass the token at the command line, you can specify the server at the command line, by using the default setting, or by setting the server in configuration. Passing the token at the command line is insecure, because typing secrets at the command line may expose them to other users on the system.
.. _galaxy_token_ansible_cfg:
Specifying the token and distribution server in configuration
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default, Ansible Galaxy is configured as the only distribution server. You can add other distribution servers and specify your API token or tokens in configuration by editing the ``galaxy_server_list`` section of your :file:`ansible.cfg` file. This is the most secure way to manage authentication for distribution servers. Specify a URL and token for each server. For example:
.. code-block:: ini
[galaxy]
server_list = release_galaxy
[galaxy_server.release_galaxy]
url=https://galaxy.ansible.com/
token=abcdefghijklmnopqrtuvwxyz
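If you publish to more than one distribution server, add a ``galaxy_server`` section for each. As a sketch, a configuration with a second, hypothetical privately hosted Automation Hub (the URL and both tokens are placeholders) could look like:

.. code-block:: ini

   [galaxy]
   server_list = release_galaxy, my_private_hub

   [galaxy_server.release_galaxy]
   url=https://galaxy.ansible.com/
   token=abcdefghijklmnopqrtuvwxyz

   [galaxy_server.my_private_hub]
   url=https://hub.example.com/
   token=zyxwvutsrqponmlkjihgfedcba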
You cannot use the ``--api-key`` argument with any servers defined in your :ref:`galaxy_server_list <galaxy_server_config>`. See :ref:`galaxy_server_config` for complete details.
.. _galaxy_use_token_arg:
Specifying the token at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
You can specify the API token at the command line using the ``--token`` argument of the :ref:`ansible-galaxy` command. There are three ways to specify the distribution server when passing the token at the command line:
* using the ``--server`` argument of the :ref:`ansible-galaxy` command
* relying on the default (https://galaxy.ansible.com)
* setting a server in configuration by creating a :ref:`GALAXY_SERVER` setting in your :file:`ansible.cfg` file
For example:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz --token abcdefghijklmnopqrtuvwxyz
.. warning::
Using the ``--token`` argument is insecure. Passing secrets at the command line may expose them to others on the system.
.. _building_collections:
Building your collection tarball
================================
After configuring one or more distribution servers, build a collection tarball. The collection tarball is the published artifact, the object that you upload and other users download to install your collection. To build a collection tarball:
#. Review the version number in your :file:`galaxy.yml` file. Each time you publish your collection, it must have a new version number. You cannot make changes to existing versions of your collection on a distribution server. If you try to upload the same collection version more than once, the distribution server returns the error ``Code: conflict.collection_exists``. Collections follow semantic versioning rules. For more information on versions, see :ref:`collection_versions`. For more information on the :file:`galaxy.yml` file, see :ref:`collections_galaxy_meta`.
#. Run ``ansible-galaxy collection build`` from inside the top-level directory of the collection. For example:
.. code-block:: bash
collection_dir#> ansible-galaxy collection build
This command builds a tarball of the collection in the current directory, which you can upload to your selected distribution server:

.. code-block:: shell

   my_collection/
   ├── galaxy.yml
   ├── ...
   ├── my_namespace-my_collection-1.0.0.tar.gz
   └── ...
.. note::
* To reduce the size of collections, certain files and folders are excluded from the collection tarball by default. See :ref:`ignoring_files_and_folders_collections` if your collection directory contains other files you want to exclude.
* The current Galaxy maximum tarball size is 2 MB.
You can upload your tarball to one or more distribution servers. You can also distribute your collection locally by copying the tarball to install your collection directly on target systems.
.. _ignoring_files_and_folders_collections:
Ignoring files and folders
--------------------------
You can exclude files from your collection with either :ref:`build_ignore <build_ignore>` or :ref:`manifest_directives`. For more information on the :file:`galaxy.yml` file, see :ref:`collections_galaxy_meta`.
.. _build_ignore:
Include all, with explicit ignores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
By default the build step includes all the files in the collection directory in the tarball except for the following:
* ``galaxy.yml``
* ``*.pyc``
* ``*.retry``
* ``tests/output``
* previously built tarballs in the root directory
* various version control directories such as ``.git/``
To exclude other files and folders from your collection tarball, set a list of file glob-like patterns in the ``build_ignore`` key in the collection's ``galaxy.yml`` file. These patterns use the following special characters for wildcard matching:
* ``*``: Matches everything
* ``?``: Matches any single character
* ``[seq]``: Matches any character in sequence
* ``[!seq]``: Matches any character not in sequence
For example, to exclude the :file:`sensitive` folder within the ``playbooks`` folder as well any ``.tar.gz`` archives, set the following in your ``galaxy.yml`` file:
.. code-block:: yaml
build_ignore:
- playbooks/sensitive
- '*.tar.gz'
.. note::
The ``build_ignore`` feature is only supported with ``ansible-galaxy collection build`` in Ansible 2.10 or newer.
.. _manifest_directives:
Manifest Directives
^^^^^^^^^^^^^^^^^^^
.. versionadded:: 2.14
The :file:`galaxy.yml` file supports manifest directives that are historically used in Python packaging, as described in `MANIFEST.in commands <https://packaging.python.org/en/latest/guides/using-manifest-in/#manifest-in-commands>`_.
.. note::
The use of ``manifest`` requires installing the optional ``distlib`` Python dependency.
.. note::
The ``manifest`` feature is only supported with ``ansible-galaxy collection build`` in ``ansible-core`` 2.14 or newer, and is mutually exclusive with ``build_ignore``.
For example, to exclude the :file:`sensitive` folder within the ``playbooks`` folder as well as any ``.tar.gz`` archives, set the following in your :file:`galaxy.yml` file:
.. code-block:: yaml
manifest:
directives:
- recursive-exclude playbooks/sensitive **
- global-exclude *.tar.gz
The ``MANIFEST.in`` style directives exclude all files by default, but a set of default directives, described below, is in place. To see the directives in use during a build, pass ``-vvv`` with the ``ansible-galaxy collection build`` command.
.. code-block::
include meta/*.yml
include *.txt *.md *.rst COPYING LICENSE
recursive-include tests **
recursive-include docs **.rst **.yml **.yaml **.json **.j2 **.txt
recursive-include roles **.yml **.yaml **.json **.j2
recursive-include playbooks **.yml **.yaml **.json
recursive-include changelogs **.yml **.yaml
recursive-include plugins */**.py
recursive-include plugins/become **.yml **.yaml
recursive-include plugins/cache **.yml **.yaml
recursive-include plugins/callback **.yml **.yaml
recursive-include plugins/cliconf **.yml **.yaml
recursive-include plugins/connection **.yml **.yaml
recursive-include plugins/filter **.yml **.yaml
recursive-include plugins/httpapi **.yml **.yaml
recursive-include plugins/inventory **.yml **.yaml
recursive-include plugins/lookup **.yml **.yaml
recursive-include plugins/netconf **.yml **.yaml
recursive-include plugins/shell **.yml **.yaml
recursive-include plugins/strategy **.yml **.yaml
recursive-include plugins/test **.yml **.yaml
recursive-include plugins/vars **.yml **.yaml
recursive-include plugins/modules **.ps1 **.yml **.yaml
recursive-include plugins/module_utils **.ps1 **.psm1 **.cs
# manifest.directives from galaxy.yml inserted here
exclude galaxy.yml galaxy.yaml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
recursive-exclude tests/output **
global-exclude /.* /__pycache__
.. note::
``<namespace>-<name>-*.tar.gz`` is expanded with the actual ``namespace`` and ``name``.
The ``manifest.directives`` supplied in :file:`galaxy.yml` are inserted after the default includes and before the default excludes.
To enable the use of manifest directives without supplying your own, insert either ``manifest: {}`` or ``manifest: null`` in the :file:`galaxy.yml` file and remove any use of ``build_ignore``.
If the default manifest directives do not meet your needs, you can set ``manifest.omit_default_directives`` to a value of ``true`` in :file:`galaxy.yml`. You must then specify a full complement of manifest directives in :file:`galaxy.yml`. The defaults documented above are a good starting point.
Below is an example where the default directives are not included.
.. code-block:: yaml
manifest:
directives:
- include meta/runtime.yml
- include README.md LICENSE
- recursive-include plugins */**.py
- exclude galaxy.yml MANIFEST.json FILES.json <namespace>-<name>-*.tar.gz
- recursive-exclude tests/output **
omit_default_directives: true
.. _signing_collections:
Signing a collection
--------------------------
You can include a GnuPG signature with your collection on a :term:`Pulp 3 Galaxy` server. See `Enabling collection signing <https://galaxyng.netlify.app/config/collection_signing/>`_ for details.
You can manually generate detached signatures for a collection with the ``gpg`` CLI using the following steps. These steps assume you have generated a GPG private key, but they do not cover that process.
.. code-block:: bash
ansible-galaxy collection build
tar -Oxzf namespace-name-1.0.0.tar.gz MANIFEST.json | gpg --output namespace-name-1.0.0.asc --detach-sign --armor --local-user [email protected] -
.. _trying_collection_locally:
Preparing to publish your collection
====================================
Each time you publish your collection, you must create a :ref:`new version <collection_versions>` on the distribution server. After you publish a version of a collection, you cannot delete or modify that version. To avoid unnecessary extra versions, check your collection for bugs, typos, and other issues locally before publishing:
#. Install the collection locally.
#. Review the locally installed collection before publishing a new version.
Installing your collection locally
----------------------------------
You have two options for installing your collection locally:
* Install your collection locally from the tarball.
* Install your collection locally from your git repository.
Installing your collection locally from the tarball
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install your collection locally from the tarball, run ``ansible-galaxy collection install`` and specify the collection tarball. You can optionally specify a location using the ``-p`` flag. For example:
.. code-block:: bash
collection_dir#> ansible-galaxy collection install my_namespace-my_collection-1.0.0.tar.gz -p ./collections
Install the tarball into a directory configured in :ref:`COLLECTIONS_PATHS` so Ansible can easily find and load the collection. If you do not specify a path value, ``ansible-galaxy collection install`` installs the collection in the first path defined in :ref:`COLLECTIONS_PATHS`.
.. _collections_scm_install:
Installing your collection locally from a git repository
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To install your collection locally from a git repository, specify the repository and the branch you want to install:
.. code-block:: bash
collection_dir#> ansible-galaxy collection install git+https://github.com/org/repo.git,devel
.. include:: ../shared_snippets/installing_collections_git_repo.txt
Reviewing your collection
-------------------------
Review the collection:
* Run a playbook that uses the modules and plugins in your collection. Verify that new features and functionality work as expected; a minimal smoke-test sketch follows this list. For examples and more details see :ref:`Using collections <using_collections>`.
* Check the documentation for typos.
* Check that the version number of your tarball is higher than the latest published version on the distribution server or servers.
* If you find any issues, fix them and rebuild the collection tarball.
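A minimal smoke-test playbook could look like the following sketch; the module name and option are hypothetical placeholders for content in your own collection:

.. code-block:: yaml

   - name: Smoke test the collection before publishing
     hosts: localhost
     gather_facts: false
     tasks:
       - name: Exercise a module from the collection
         my_namespace.my_collection.my_module:  # hypothetical module
           some_option: some_value  # hypothetical option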
.. _collection_versions:
Understanding collection versioning
-----------------------------------
The only way to change a collection is to release a new version. The latest version of a collection (by highest version number) is the version displayed everywhere in Galaxy and Automation Hub. Users can still download older versions.
Follow semantic versioning when setting the version for your collection. In summary:
* Increment the major version number, ``x`` of ``x.y.z``, for an incompatible API change.
* Increment the minor version number, ``y`` of ``x.y.z``, for new functionality in a backwards compatible manner (for example new modules/plugins, parameters, return values).
* Increment the patch version number, ``z`` of ``x.y.z``, for backwards compatible bug fixes.
Read the official `Semantic Versioning <https://semver.org/>`_ documentation for details and examples.
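For example, assuming a hypothetical collection currently at version ``1.2.3``:

.. code-block:: text

   1.2.3 -> 2.0.0   incompatible change (for example, a removed module option)
   1.2.3 -> 1.3.0   backwards compatible feature (for example, a new module)
   1.2.3 -> 1.2.4   backwards compatible bug fix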
.. _publish_collection:
Publishing your collection
==========================
The last step in distributing your collection is publishing the tarball to Ansible Galaxy, Red Hat Automation Hub, or a privately hosted Automation Hub instance. You can publish your collection in two ways:
* from the command line using the ``ansible-galaxy collection publish`` command
* from the website of the distribution server (Galaxy, Automation Hub) itself
.. _upload_collection_ansible_galaxy:
.. _publish_collection_galaxy_cmd:
Publishing a collection from the command line
---------------------------------------------
To upload the collection tarball from the command line using ``ansible-galaxy``:
.. code-block:: bash
ansible-galaxy collection publish path/to/my_namespace-my_collection-1.0.0.tar.gz
.. note::
This ansible-galaxy command assumes you have retrieved and stored your API token in configuration. See :ref:`galaxy_specify_token` for details.
The ``ansible-galaxy collection publish`` command triggers an import process, just as if you uploaded the collection through the Galaxy website. The command waits until the import process completes before reporting the status back. If you want to continue without waiting for the import result, use the ``--no-wait`` argument and manually look at the import progress in your `My Imports <https://galaxy.ansible.com/my-imports/>`_ page.
.. _upload_collection_galaxy:
Publishing a collection from the website
----------------------------------------
To publish your collection directly on the Galaxy website:
#. Go to the `My Content <https://galaxy.ansible.com/my-content/namespaces>`_ page, and click the **Add Content** button on one of your namespaces.
#. From the **Add Content** dialogue, click **Upload New Collection**, and select the collection archive file from your local filesystem.
When you upload a collection, Ansible always uploads the tarball to the namespace specified in the collection metadata in the ``galaxy.yml`` file, no matter which namespace you select on the website. If you are not an owner of the namespace specified in your collection metadata, the upload request fails.
After Galaxy uploads and accepts a collection, the website shows you the **My Imports** page. This page shows import process information. You can review any errors or warnings about your upload there.
.. seealso::
:ref:`collections`
Learn how to install and use collections.
:ref:`collections_galaxy_meta`
Table of fields used in the :file:`galaxy.yml` file
`Mailing List <https://groups.google.com/group/ansible-devel>`_
The development mailing list
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,964 |
Docs: Add code-block wrappers to code examples: developing_modules_general_windows.rst.
|
### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
Translation programs then attempt to translate this code, which we don't want.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the `developing_modules_general_windows.rst ` file in the Developer Guide (`docs/docsite/rst/dev_guide`), there are 2 instances of lead-in sentences ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" . | grep developing_modules_general_windows.rst
```
**Example:**
Before:
```
Before running ``ansible-playbook``, run the following command to enable logging::
export ANSIBLE_LOG_PATH=~/ansible.log
```
After:
```
Before running ``ansible-playbook``, run the following command to enable logging:
.. code-block:: shell
export ANSIBLE_LOG_PATH=~/ansible.log
```
This problem has been addressed in some other guides; view these merged PRs to help get you started:
- Network Guide: [#75850](https://github.com/ansible/ansible/pull/75850/files)
- Developer Guide: [#75849](https://github.com/ansible/ansible/pull/75849/files)
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev-guide/developing_modules_general_windows.rst
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a code-block element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78964
|
https://github.com/ansible/ansible/pull/78985
|
1b922b42dd5e18aeff789f2ee6fcb0a43485ac12
|
56c48d1c4507754be9bb1b557ed6681306492180
| 2022-10-01T12:47:13Z |
python
| 2022-10-03T08:51:14Z |
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
|
.. _developing_modules_general_windows:
**************************************
Windows module development walkthrough
**************************************
In this section, we will walk through developing, testing, and debugging an
Ansible Windows module.
Because Windows modules are written in Powershell and need to be run on a
Windows host, this guide differs from the usual development walkthrough guide.
What's covered in this section:
.. contents::
:local:
Windows environment setup
=========================
Unlike Python module development which can be run on the host that runs
Ansible, Windows modules need to be written and tested for Windows hosts.
While evaluation editions of Windows can be downloaded from
Microsoft, these images are usually not ready to be used by Ansible without
further modification. The easiest way to set up a Windows host so that it is
ready to be used by Ansible is to create a virtual machine using Vagrant.
Vagrant can be used to download existing OS images called *boxes* that are then
deployed to a hypervisor like VirtualBox. These boxes can either be created and
stored offline or they can be downloaded from a central repository called
Vagrant Cloud.
This guide will use the Vagrant boxes created by the `packer-windoze <https://github.com/jborean93/packer-windoze>`_
repository which have also been uploaded to `Vagrant Cloud <https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=jborean93>`_.
To find out more about how these images are created, go to the GitHub
repo and look at the ``README`` file.
Before you can get started, the following programs must be installed (please consult the Vagrant and
VirtualBox documentation for installation instructions):
- Vagrant
- VirtualBox
Create a Windows server in a VM
===============================
To create a single Windows Server 2016 instance, run the following:
.. code-block:: shell
vagrant init jborean93/WindowsServer2016
vagrant up
This will download the Vagrant box from Vagrant Cloud and add it to the local
boxes on your host and then start up that instance in VirtualBox. When starting
for the first time, the Windows VM will run through the sysprep process and
then create an HTTP and HTTPS WinRM listener automatically. Vagrant will finish
its process once the listeners are online, after which the VM can be used by Ansible.
Create an Ansible inventory
===========================
The following Ansible inventory file can be used to connect to the newly
created Windows VM:
.. code-block:: ini
[windows]
WindowsServer ansible_host=127.0.0.1
[windows:vars]
ansible_user=vagrant
ansible_password=vagrant
ansible_port=55986
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
.. note:: The port ``55986`` is automatically forwarded by Vagrant to the
Windows host that was created. If this conflicts with an existing local
port, Vagrant will automatically use another one at random and show
that in the output.
The OS that is created is based on the image set. The following
images can be used:
- `jborean93/WindowsServer2012 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012>`_
- `jborean93/WindowsServer2012R2 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012R2>`_
- `jborean93/WindowsServer2016 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2016>`_
- `jborean93/WindowsServer2019 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2019>`_
- `jborean93/WindowsServer2022 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2022>`_
When the host is online, it can be accessed over RDP on ``127.0.0.1:3389``, but the
port may differ depending on whether there was a conflict. To get rid of the host, run
``vagrant destroy --force`` and Vagrant will automatically remove the VM and
any other files associated with that VM.
While this is useful when testing modules on a single Windows instance, these
hosts won't work with domain-based modules without modification. The Vagrantfile
at `ansible-windows <https://github.com/jborean93/ansible-windows/tree/master/vagrant>`_
can be used to create a test domain environment to be used in Ansible. This
repo contains three files which are used by both Ansible and Vagrant to create
multiple Windows hosts in a domain environment. These files are:
- ``Vagrantfile``: The Vagrant file that reads the inventory setup of ``inventory.yml`` and provisions the hosts that are required
- ``inventory.yml``: Contains the hosts that are required and other connection information such as IP addresses and forwarded ports
- ``main.yml``: Ansible playbook called by Vagrant to provision the domain controller and join the child hosts to the domain
By default, these files will create the following environment:
- A single domain controller running on Windows Server 2016
- Five child hosts for each major Windows Server version joined to that domain
- A domain with the DNS name ``domain.local``
- A local administrator account on each host with the username ``vagrant`` and password ``vagrant``
- A domain admin account ``[email protected]`` with the password ``VagrantPass1``
The domain name and accounts can be modified by changing the variables
``domain_*`` in the ``inventory.yml`` file if it is required. The inventory
file can also be modified to provision more or fewer servers by changing the
hosts that are defined under the ``domain_children`` key. The host variable
``ansible_host`` is the private IP that will be assigned to the VirtualBox host
only network adapter while ``vagrant_box`` is the box that will be used to
create the VM.
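
As a rough illustration, an entry for a single child host under ``domain_children``
could look like the sketch below; the host name, IP address, and box are placeholder
values and the real ``inventory.yml`` in the repo contains more settings:

.. code-block:: yaml

    # Sketch of a single host entry; the values are placeholders.
    domain_children:
      SERVER2016:
        ansible_host: 192.168.56.20
        vagrant_box: jborean93/WindowsServer2016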
Provisioning the environment
============================
To provision the environment as is, run the following:
.. code-block:: shell
git clone https://github.com/jborean93/ansible-windows.git
cd vagrant
vagrant up
.. note:: Vagrant provisions each host sequentially so this can take some time
to complete. If any errors occur during the Ansible phase of setting up the
domain, run ``vagrant provision`` to rerun just that step.
Unlike setting up a single Windows instance with Vagrant, these hosts can also
be accessed using the IP address directly as well as through the forwarded
ports. It is easier to access them over the host-only network adapter as the
normal protocol ports are used, for example RDP is still over ``3389``. In cases where
the host cannot be resolved using the host-only network IP, the following
protocols can be accessed over ``127.0.0.1`` using these forwarded ports:
- ``RDP``: 295xx
- ``SSH``: 296xx
- ``WinRM HTTP``: 297xx
- ``WinRM HTTPS``: 298xx
- ``SMB``: 299xx
Replace ``xx`` with the entry number in the inventory file, where the domain
controller starts at ``00`` and the number increments from there. For example, in
the default ``inventory.yml`` file, WinRM over HTTPS for ``SERVER2012R2`` is
forwarded over port ``29804`` as it's the fourth entry in ``domain_children``.
Windows new module development
==============================
When creating a new module there are a few things to keep in mind (a minimal skeleton illustrating some of these points follows this list):
- Module code is in Powershell (.ps1) files while the documentation is contained in Python (.py) files of the same name
- Avoid using ``Write-Host/Debug/Verbose/Error`` in the module and add what needs to be returned to the ``$module.Result`` variable
- To fail a module, call ``$module.FailJson("failure message here")``; an Exception or ErrorRecord can be passed as a second argument, for example ``FailJson("failure message here", $_)``, for a more descriptive error message
- Most new modules require check mode and integration tests before they are merged into the main Ansible codebase
- Avoid using try/catch statements over a large code block, rather use them for individual calls so the error message can be more descriptive
- Try and catch specific exceptions when using try/catch statements
- Avoid using PSCustomObjects unless necessary
- Look for common functions in ``./lib/ansible/module_utils/powershell/`` and use the code there instead of duplicating work. These can be imported by adding the line ``#Requires -Module *`` where * is the filename to import, and will be automatically included with the module code sent to the Windows target when run via Ansible
- As well as PowerShell module utils, C# module utils are stored in ``./lib/ansible/module_utils/csharp/`` and are automatically imported in a module execution if the line ``#AnsibleRequires -CSharpUtil *`` is present
- C# and PowerShell module utils achieve the same goal but C# allows a developer to implement low level tasks, such as calling the Win32 API, and can be faster in some cases
- Ensure the code runs under Powershell v3 and higher on Windows Server 2012 and higher; if higher minimum Powershell or OS versions are required, ensure the documentation reflects this clearly
- Ansible runs modules under strictmode version 2.0. Be sure to test with that enabled by putting ``Set-StrictMode -Version 2.0`` at the top of your dev script
- Favor native Powershell cmdlets over executable calls if possible
- Use the full cmdlet name instead of aliases, for example ``Remove-Item`` over ``rm``
- Use named parameters with cmdlets, for example ``Remove-Item -Path C:\temp`` over ``Remove-Item C:\temp``
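
The following sketch shows how some of these points fit together in a minimal
module; the ``name`` option and the work inside the check mode guard are
placeholders:

.. code-block:: powershell

    #!powershell
    #AnsibleRequires -CSharpUtil Ansible.Basic

    $spec = @{
        options = @{
            name = @{ type = 'str'; required = $true }
        }
        supports_check_mode = $true
    }
    $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)

    $module.Result.changed = $false
    if (-not $module.CheckMode) {
        # Do the actual work here and set $module.Result.changed = $true
        # whenever a change was made to the host.
    }

    $module.ExitJson()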
A very basic Powershell module `win_environment <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_environment.ps1>`_ incorporates best practices for Powershell modules. It demonstrates how to implement check-mode and diff-support, and also shows a warning to the user when a specific condition is met.
A slightly more advanced module is `win_uri <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_uri.ps1>`_ which additionally shows how to use different parameter types (bool, str, int, list, dict, path) and a selection of choices for parameters, how to fail a module and how to handle exceptions.
As part of the new ``AnsibleModule`` wrapper, the input parameters are defined and validated based on an argument
spec. The following options can be set at the root level of the argument spec (a short example follows this list):
- ``mutually_exclusive``: A list of lists, where the inner list contains module options that cannot be set together
- ``no_log``: Stops the module from emitting any logs to the Windows Event log
- ``options``: A dictionary where the key is the module option and the value is the spec for that option
- ``required_by``: A dictionary where the option(s) specified by the value must be set if the option specified by the key is also set
- ``required_if``: A list of lists where the inner list contains 3 or 4 elements;
* The first element is the module option to check the value against
* The second element is the value of the option specified by the first element, if matched then the required if check is run
* The third element is a list of required module options when the above is matched
* An optional fourth element is a boolean that states whether all module options in the third elements are required (default: ``$false``) or only one (``$true``)
- ``required_one_of``: A list of lists, where the inner list contains module options where at least one must be set
- ``required_together``: A list of lists, where the inner list contains module options that must be set together
- ``supports_check_mode``: Whether the module supports check mode, by default this is ``$false``
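
For example, a spec using some of these root level options could be sketched as
follows; the option names are placeholders:

.. code-block:: powershell

    $spec = @{
        options = @{
            src = @{ type = 'path' }
            content = @{ type = 'str' }
            dest = @{ type = 'path'; required = $true }
        }
        # src and content cannot be set together, but one of them must be set
        mutually_exclusive = @(
            ,@('src', 'content')
        )
        required_one_of = @(
            ,@('src', 'content')
        )
        supports_check_mode = $true
    }
    $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)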
The actual input options for a module are set within the ``options`` value as a dictionary. The keys of this dictionary
are the module option names while the values are the spec of that module option. Each spec can have the following
options set:
- ``aliases``: A list of aliases for the module option
- ``choices``: A list of valid values for the module option, if ``type=list`` then each list value is validated against the choices and not the list itself
- ``default``: The default value for the module option if not set
- ``deprecated_aliases``: A list of hashtables that define aliases that are deprecated and the versions they will be removed in. Each entry must contain the keys ``name`` and ``collection_name`` with either ``version`` or ``date``
- ``elements``: When ``type=list``, this sets the type of each list value, the values are the same as ``type``
- ``no_log``: Will sanitise the input value before being returned in the ``module_invocation`` return value
- ``removed_in_version``: States when a deprecated module option is to be removed, a warning is displayed to the end user if set
- ``removed_at_date``: States the date (YYYY-MM-DD) when a deprecated module option will be removed, a warning is displayed to the end user if set
- ``removed_from_collection``: States from which collection the deprecated module option will be removed; must be specified if one of ``removed_in_version`` and ``removed_at_date`` is specified
- ``required``: Will fail when the module option is not set
- ``type``: The type of the module option, if not set then it defaults to ``str``. The valid types are;
* ``bool``: A boolean value
* ``dict``: A dictionary value, if the input is a JSON or key=value string then it is converted to a dictionary
* ``float``: A float or `Single <https://docs.microsoft.com/en-us/dotnet/api/system.single?view=netframework-4.7.2>`_ value
* ``int``: An Int32 value
* ``json``: A string where the value is converted to a JSON string if the input is a dictionary
* ``list``: A list of values, ``elements=<type>`` can convert the individual list value types if set. If ``elements=dict`` and ``options`` is defined, the values will be validated against the argument spec. When the input is a string then the string is split by ``,`` and any whitespace is trimmed
* ``path``: A string where values like ``%TEMP%`` are expanded based on environment values. If the input value starts with ``\\?\`` then no expansion is run
* ``raw``: No conversions occur on the value passed in by Ansible
* ``sid``: Will convert Windows security identifier values or Windows account names to a `SecurityIdentifier <https://docs.microsoft.com/en-us/dotnet/api/system.security.principal.securityidentifier?view=netframework-4.7.2>`_ value
* ``str``: The value is converted to a string
When ``type=dict``, or ``type=list`` and ``elements=dict``, the following keys can also be set for that module option (see the example after this list):
- ``apply_defaults``: The value is based on the ``options`` spec defaults for that key if ``True`` and null if ``False``. Only valid when the module option is not defined by the user and ``type=dict``.
- ``mutually_exclusive``: Same as the root level ``mutually_exclusive`` but validated against the values in the sub dict
- ``options``: Same as the root level ``options`` but contains the valid options for the sub option
- ``required_if``: Same as the root level ``required_if`` but validated against the values in the sub dict
- ``required_by``: Same as the root level ``required_by`` but validated against the values in the sub dict
- ``required_together``: Same as the root level ``required_together`` but validated against the values in the sub dict
- ``required_one_of``: Same as the root level ``required_one_of`` but validated against the values in the sub dict
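
The sketch below combines several of these settings, including a sub option spec
for a ``dict`` option; all option names are placeholders:

.. code-block:: powershell

    $spec = @{
        options = @{
            state = @{ type = 'str'; choices = 'absent', 'present'; default = 'present' }
            timeout = @{ type = 'int'; default = 30 }
            users = @{ type = 'list'; elements = 'str' }
            proxy = @{
                type = 'dict'
                options = @{
                    host = @{ type = 'str' }
                    port = @{ type = 'int'; default = 8080 }
                    password = @{ type = 'str'; no_log = $true }
                }
                # host and password must be set together inside the proxy dict
                required_together = @(
                    ,@('host', 'password')
                )
            }
        }
    }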
A module type can also be a delegate function that converts the value to whatever is required by the module option. For
example the following snippet shows how to create a custom type that creates a ``UInt64`` value:
.. code-block:: powershell
$spec = @{
uint64_type = @{ type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) } }
}
$uint64_type = $module.Params.uint64_type
When in doubt, look at some of the other core modules and see how things have been
implemented there.
Sometimes there are multiple ways that Windows offers to complete a task; this
is the order to favor when writing modules:
- Native Powershell cmdlets like ``Remove-Item -Path C:\temp -Recurse``
- .NET classes like ``[System.IO.Path]::GetRandomFileName()``
- WMI objects through the ``New-CimInstance`` cmdlet
- COM objects through ``New-Object -ComObject`` cmdlet
- Calls to native executables like ``Secedit.exe``
PowerShell modules support a small subset of the ``#Requires`` options built
into PowerShell as well as some Ansible-specific requirements specified by
``#AnsibleRequires``. These statements can be placed at any point in the script,
but are most commonly near the top. They are used to make it easier to state the
requirements of the module without writing any of the checks. Each ``requires``
statement must be on its own line, but there can be multiple requires statements
in one script.
These are the checks that can be used within Ansible modules:
- ``#Requires -Module Ansible.ModuleUtils.<module_util>``: Added in Ansible 2.4, specifies a module_util to load in for the module execution.
- ``#Requires -Version x.y``: Added in Ansible 2.5, specifies the version of PowerShell that is required by the module. The module will fail if this requirement is not met.
- ``#AnsibleRequires -PowerShell <module_util>``: Added in Ansible 2.8, like ``#Requires -Module``, this specifies a module_util to load in for module execution.
- ``#AnsibleRequires -CSharpUtil <module_util>``: Added in Ansible 2.8, specifies a C# module_util to load in for the module execution.
- ``#AnsibleRequires -OSVersion x.y``: Added in Ansible 2.5, specifies the OS build version that is required by the module and will fail if this requirement is not met. The actual OS version is derived from ``[Environment]::OSVersion.Version``.
- ``#AnsibleRequires -Become``: Added in Ansible 2.5, forces the exec runner to run the module with ``become``, which is primarily used to bypass WinRM restrictions. If ``ansible_become_user`` is not specified then the ``SYSTEM`` account is used instead.
The ``#AnsibleRequires -PowerShell`` and ``#AnsibleRequires -CSharpUtil``
support further features such as:
- Importing a util contained in a collection (added in Ansible 2.9)
- Importing a util by relative names (added in Ansible 2.10)
- Specifying the util is optional by adding `-Optional` to the import
declaration (added in Ansible 2.12).
See the below examples for more details:
.. code-block:: powershell
# Imports the PowerShell Ansible.ModuleUtils.Legacy provided by Ansible itself
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Legacy
# Imports the PowerShell my_util in the my_namespace.my_name collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the PowerShell my_util that exists in the same collection as the current module
#AnsibleRequires -PowerShell ..module_utils.my_util
# Imports the PowerShell Ansible.ModuleUtils.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional
# Imports the C# Ansible.Process provided by Ansible itself
#AnsibleRequires -CSharpUtil Ansible.Process
# Imports the C# my_util in the my_namespace.my_name collection
#AnsibleRequires -CSharpUtil ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the C# my_util that exists in the same collection as the current module
#AnsibleRequires -CSharpUtil ..module_utils.my_util
# Imports the C# Ansible.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -CSharpUtil Ansible.Optional -Optional
For optional require statements, it is up to the module code to then verify
whether the util has been imported before trying to use it. This can be done by
checking if a function or type provided by the util exists or not.
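
For example, a module could guard its use of an optional util like the sketch
below, where ``my_optional_util`` and ``Get-CustomInfo`` stand in for a util and
a function it would provide:

.. code-block:: powershell

    #AnsibleRequires -PowerShell ..module_utils.my_optional_util -Optional

    if (Get-Command -Name Get-CustomInfo -ErrorAction SilentlyContinue) {
        # The optional util was found and imported, use its functions.
        $info = Get-CustomInfo
    } else {
        # The util is not available, fall back to some other behavior.
        $info = $null
    }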
While both ``#Requires -Module`` and ``#AnsibleRequires -PowerShell`` can be
used to load a PowerShell module it is recommended to use ``#AnsibleRequires``.
This is because ``#AnsibleRequires`` supports collection module utils, imports
by relative util names, and optional util imports.
C# module utils can reference other C# utils by adding the line
``using Ansible.<module_util>;`` to the top of the script with all the other
using statements.
Windows module utilities
========================
Like Python modules, PowerShell modules also provide a number of module
utilities that provide helper functions within PowerShell. These module_utils
can be imported by adding the following line to a PowerShell module:
.. code-block:: powershell
#Requires -Module Ansible.ModuleUtils.Legacy
This will import the module_util at ``./lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1``
and enable calling all of its functions. As of Ansible 2.8, Windows module
utils can also be written in C# and stored at ``lib/ansible/module_utils/csharp``.
These module_utils can be imported by adding the following line to a PowerShell
module:
.. code-block:: powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
This will import the module_util at ``./lib/ansible/module_utils/csharp/Ansible.Basic.cs``
and automatically load the types in the executing process. C# module utils can
reference each other and be loaded together by adding the following line to the
using statements at the top of the util:
.. code-block:: csharp
using Ansible.Become;
There are special comments that can be set in a C# file for controlling the
compilation parameters. The following comments can be added to the script;
- ``//AssemblyReference -Name <assembly dll> [-CLR [Core|Framework]]``: The assembly DLL to reference during compilation, the optional ``-CLR`` flag can also be used to state whether to reference when running under .NET Core, Framework, or both (if omitted)
- ``//NoWarn -Name <error id> [-CLR [Core|Framework]]``: A compiler warning ID to ignore when compiling the code, the optional ``-CLR`` works the same as above. A list of warnings can be found at `Compiler errors <https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/index>`_
As well as this, the following pre-processor symbols are defined;
- ``CORECLR``: This symbol is present when PowerShell is running through .NET Core
- ``WINDOWS``: This symbol is present when PowerShell is running on Windows
- ``UNIX``: This symbol is present when PowerShell is running on Unix
A combination of these flags helps to make a module util interoperable on both
.NET Framework and .NET Core. Here is an example of them in action:
.. code-block:: csharp
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
//AssemblyReference -Name Newtonsoft.Json.dll -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
// Ignore error CS1702 for all .NET types
//NoWarn -Name CS1702
// Ignore error CS1956 only for .NET Framework
//NoWarn -Name CS1956 -CLR Framework
The following is a list of module_utils that are packaged with Ansible and a general description of what
they do:
- ArgvParser: Utility used to convert a list of arguments to an escaped string compliant with the Windows argument parsing rules.
- CamelConversion: Utility used to convert camelCase strings/lists/dicts to snake_case.
- CommandUtil: Utility used to execute a Windows process and return the stdout/stderr and rc as separate objects.
- FileUtil: Utility that expands on the ``Get-ChildItem`` and ``Test-Path`` to work with special files like ``C:\pagefile.sys``.
- Legacy: General definitions and helper utilities for Ansible module.
- LinkUtil: Utility to create, remove, and get information about symbolic links, junction points and hard links.
- SID: Utilities used to convert a user or group to a Windows SID and vice versa.
For more details on any specific module utility and their requirements, please see the `Ansible
module utilities source code <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils/powershell>`_.
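
As an example of using one of these packaged utils, the sketch below imports
``CommandUtil`` and runs an executable; it assumes the ``Run-Command`` helper and
its result keys (``stdout``, ``stderr``, ``rc``) as provided by that util, and the
command itself is a placeholder:

.. code-block:: powershell

    #Requires -Module Ansible.ModuleUtils.CommandUtil

    # Run a process and inspect its output.
    $result = Run-Command -command "whoami.exe /all"
    if ($result.rc -ne 0) {
        $module.FailJson("whoami.exe failed: $($result.stderr)")
    }
    $module.Result.whoami = $result.stdout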
PowerShell module utilities can be stored outside of the standard Ansible
distribution for use with custom modules. Custom module_utils are placed in a
folder called ``module_utils`` located in the root folder of the playbook or role
directory.
C# module utilities can also be stored outside of the standard Ansible distribution for use with custom modules. Like
PowerShell utils, these are stored in a folder called ``module_utils`` and the filename must end in the extension
``.cs``, start with ``Ansible.`` and be named after the namespace defined in the util.
The below example is a role structure that contains two PowerShell custom module_utils called
``Ansible.ModuleUtils.ModuleUtil1``, ``Ansible.ModuleUtils.ModuleUtil2``, and a C# util containing the namespace
``Ansible.CustomUtil``::
meta/
main.yml
defaults/
main.yml
module_utils/
Ansible.ModuleUtils.ModuleUtil1.psm1
Ansible.ModuleUtils.ModuleUtil2.psm1
Ansible.CustomUtil.cs
tasks/
main.yml
Each PowerShell module_util must contain at least one function that has been exported with ``Export-ModuleMember``
at the end of the file. For example:
.. code-block:: powershell
Export-ModuleMember -Function Invoke-CustomUtil, Get-CustomInfo
Exposing shared module options
++++++++++++++++++++++++++++++
PowerShell module utils can easily expose common module options that a module can use when building its argument spec.
This allows common features to be stored and maintained in one location and have those features used by multiple
modules with minimal effort. Any new features or bugfixes added to one of these utils are then automatically used by
the various modules that call that util.
An example of this would be to have a module util that handles authentication and communication against an API. This
util can be used by multiple modules to expose a common set of module options like the API endpoint, username,
password, timeout, cert validation, and so on without having to add those options to each module spec.
The standard convention for a module util with a shared argument spec is to have:
- A ``Get-<namespace.name.util name>Spec`` function that outputs the common spec for a module
* It is highly recommended to make this function name unique to the module to avoid any conflicts with other utils that can be loaded
* The format of the output spec is a Hashtable in the same format as the ``$spec`` used for normal modules
- A function that takes in an ``AnsibleModule`` object called under the ``-Module`` parameter which it can use to get the shared options
Because these options can be shared across various modules, it is highly recommended to keep the module option names and
aliases in the shared spec as specific as they can be. For example, do not have a util option called ``password``;
rather, prefix it with a unique name like ``acme_password``.
.. warning::
Failure to have a unique option name or alias can prevent the util from being used by modules that also use those
names or aliases for their own options.
The following is an example module util called ``ServiceAuth.psm1`` in a collection that implements a common way for
modules to authenticate with a service.
.. code-block:: powershell
Function Invoke-MyServiceResource {
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
$Module,
[Parameter(Mandatory=$true)]
[String]
$ResourceId,
[String]
$State = 'present'
)
# Process the common module options known to the util
$params = @{
ServerUri = $Module.Params.my_service_url
}
if ($Module.Params.my_service_username) {
$params.Credential = Get-MyServiceCredential
}
if ($State -eq 'absent') {
Remove-MyService @params -ResourceId $ResourceId
} else {
New-MyService @params -ResourceId $ResourceId
}
}
Function Get-MyNamespaceMyCollectionServiceAuthSpec {
# Output the util spec
@{
options = @{
my_service_url = @{ type = 'str'; required = $true }
my_service_username = @{ type = 'str' }
my_service_password = @{ type = 'str'; no_log = $true }
}
required_together = @(
,@('my_service_username', 'my_service_password')
)
}
}
$exportMembers = @{
Function = 'Get-MyNamespaceMyCollectionServiceAuthSpec', 'Invoke-MyServiceResource'
}
Export-ModuleMember @exportMembers
For a module to take advantage of this common argument spec, it can be set out like this:
.. code-block:: powershell
#!powershell
# Include the module util ServiceAuth.psm1 from the my_namespace.my_collection collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_collection.plugins.module_utils.ServiceAuth
# Create the module spec like normal
$spec = @{
options = @{
resource_id = @{ type = 'str'; required = $true }
state = @{ type = 'str'; choices = 'absent', 'present' }
}
}
# Create the module from the module spec but also include the util spec to merge into our own.
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-MyNamespaceMyCollectionServiceAuthSpec))
# Call the ServiceAuth module util and pass in the module object so it can access the module options.
Invoke-MyServiceResource -Module $module -ResourceId $module.Params.resource_id -State $module.Params.state
$module.ExitJson()
.. note::
Options defined in the module spec will always have precedence over a util spec. Any list values under the same key
in a util spec will be appended to the module spec for that same key. Dictionary values will add any keys that are
missing from the module spec and merge any values that are lists or dictionaries. This is similar to how the doc
fragment plugins work when extending module documentation.
To document these shared util options for a module, create a doc fragment plugin that documents the options implemented
by the module util and extend the module docs for every module that implements the util to include that fragment in
its docs.
Windows playbook module testing
===============================
You can test a module with an Ansible playbook. For example:
- Create a playbook in any directory ``touch testmodule.yml``.
- Create an inventory file in the same directory ``touch hosts``.
- Populate the inventory file with the variables required to connect to the Windows host or hosts.
- Add the following to the new playbook file::
---
- name: test out windows module
hosts: windows
tasks:
- name: test out module
win_module:
name: test name
- Run the playbook ``ansible-playbook -i hosts testmodule.yml``
This can be useful for seeing how Ansible runs with
the new module end to end. Other possible ways to test the module are
shown below.
Windows debugging
=================
Debugging a module currently can only be done on a Windows host. This can be
useful when developing a new module or implementing bug fixes. These
are some steps that need to be followed to set this up:
- Copy the module script to the Windows server
- Copy the folders ``./lib/ansible/module_utils/powershell`` and ``./lib/ansible/module_utils/csharp`` to the same directory as the script above
- Add an extra ``#`` to the start of any ``#Requires -Module`` lines in the module code to comment them out; only lines starting with ``#Requires -Module`` need this change
- Add the following to the start of the module script that was copied to the server:
.. code-block:: powershell
# Set $ErrorActionPreference to what's set during Ansible execution
$ErrorActionPreference = "Stop"
# Set the first argument as the path to a JSON file that contains the module args
$args = @("$($pwd.Path)\args.json")
# Or instead of an args file, set $complex_args to the pre-processed module args
$complex_args = @{
_ansible_check_mode = $false
_ansible_diff = $false
path = "C:\temp"
state = "present"
}
# Import any C# utils referenced with '#AnsibleRequires -CSharpUtil' or 'using Ansible.<module_util>;'
# The $_csharp_utils entries should be the contents of the C# util files and not the path
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.AddType.psm1"
$_csharp_utils = @(
[System.IO.File]::ReadAllText("$($pwd.Path)\csharp\Ansible.Basic.cs")
)
Add-CSharpType -References $_csharp_utils -IncludeDebugInfo
# Import any PowerShell modules referenced with '#Requires -Module'
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.Legacy.psm1"
# End of the setup code and start of the module code
#!powershell
You can add more args to ``$complex_args`` as required by the module or define the module options through a JSON file
with the structure:
.. code-block:: json
{
"ANSIBLE_MODULE_ARGS": {
"_ansible_check_mode": false,
"_ansible_diff": false,
"path": "C:\\temp",
"state": "present"
}
}
There are multiple IDEs that can be used to debug a Powershell script; two of
the most popular ones are:
- `Powershell ISE`_
- `Visual Studio Code`_
.. _Powershell ISE: https://docs.microsoft.com/en-us/powershell/scripting/core-powershell/ise/how-to-debug-scripts-in-windows-powershell-ise
.. _Visual Studio Code: https://blogs.technet.microsoft.com/heyscriptingguy/2017/02/06/debugging-powershell-script-in-visual-studio-code-part-1/
To be able to view the arguments as passed by Ansible to the module, follow these steps:
- Prefix the Ansible command with :envvar:`ANSIBLE_KEEP_REMOTE_FILES=1<ANSIBLE_KEEP_REMOTE_FILES>` to specify that Ansible should keep the exec files on the server.
- Log onto the Windows server using the same user account that Ansible used to execute the module.
- Navigate to ``%TEMP%\..``. It should contain a folder starting with ``ansible-tmp-``.
- Inside this folder, open the PowerShell script for the module.
- In this script is a raw JSON script under ``$json_raw`` which contains the module arguments under ``module_args``. These args can be assigned manually to the ``$complex_args`` variable that is defined on your debug script or put in the ``args.json`` file.
Windows unit testing
====================
Currently there is no mechanism to run unit tests for Powershell modules under Ansible CI.
Windows integration testing
===========================
Integration tests for Ansible modules are typically written as Ansible roles. These test
roles are located in ``./test/integration/targets``. You must first set up your testing
environment, and configure a test inventory for Ansible to connect to.
In this example we will set up a test inventory to connect to two hosts and run the integration
tests for win_stat:
- Run the command ``source ./hacking/env-setup`` to prepare environment.
- Create a copy of ``./test/integration/inventory.winrm.template`` and name it ``inventory.winrm``.
- Fill in entries under ``[windows]`` and set the required variables that are needed to connect to the host.
- :ref:`Install the required Python modules <windows_winrm>` to support WinRM and a configured authentication method.
- To execute the integration tests, run ``ansible-test windows-integration win_stat``; you can replace ``win_stat`` with the role you want to test.
This will execute all the tests currently defined for that role. You can set
the verbosity level using the ``-v`` argument just as you would with
ansible-playbook.
When developing tests for a new module, it is recommended to test a scenario once in
check mode and twice not in check mode. This ensures that check mode
does not make any changes but reports a change, as well as that the second run is
idempotent and does not report changes. For example:
.. code-block:: yaml
- name: remove a file (check mode)
win_file:
path: C:\temp
state: absent
register: remove_file_check
check_mode: yes
- name: get result of remove a file (check mode)
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual_check
- name: assert remove a file (check mode)
assert:
that:
- remove_file_check is changed
- remove_file_actual_check.stdout == 'true\r\n'
- name: remove a file
win_file:
path: C:\temp
state: absent
register: remove_file
- name: get result of remove a file
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual
- name: assert remove a file
assert:
that:
- remove_file is changed
- remove_file_actual.stdout == 'false\r\n'
- name: remove a file (idempotent)
win_file:
path: C:\temp
state: absent
register: remove_file_again
- name: assert remove a file (idempotent)
assert:
that:
- not remove_file_again is changed
Windows communication and development support
=============================================
Join the ``#ansible-devel`` or ``#ansible-windows`` chat channels (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) for discussions about Ansible development for Windows.
For questions and discussions pertaining to using the Ansible product,
use the ``#ansible`` channel.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,899 |
Docs: Replace occurrences of "See http://" with a descriptive label in 4 files
|
### Summary
Accessibility guidelines recommend we do not use "See http://<website>" in documentation, but instead provide context around this for screen readers etc.
In this issue, we've identified 4 files that use this convention in the documentation. For each occurrence, replace it with an RST link to [external web page](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages) .
Specifically, use the format \`descriptive phrase \<url\>\`_
List of affected RST pages are in a follow-on comment. You can choose to fix one at a time, using the Edit on GitHub link at the top of the RST page, or in one PR to fix them both.
### Issue Type
Documentation Report
### Component Name
rst/dev_guide/testing/sanity/mypy.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78899
|
https://github.com/ansible/ansible/pull/78959
|
fb8c2daf46d3a9293ee9ea6555279aca0fb62b9a
|
f7c01bc866b1e531f0eacc57ab98294b4745a221
| 2022-09-27T20:47:03Z |
python
| 2022-10-03T19:01:27Z |
docs/docsite/rst/dev_guide/developing_module_utilities.rst
|
.. _developing_module_utilities:
*************************************
Using and developing module utilities
*************************************
Ansible provides a number of module utilities, or snippets of shared code, that
provide helper functions you can use when developing your own modules. The
``basic.py`` module utility provides the main entry point for accessing the
Ansible library, and all Python Ansible modules must import something from
``ansible.module_utils``. A common option is to import ``AnsibleModule``:
.. code-block:: python
from ansible.module_utils.basic import AnsibleModule
The ``ansible.module_utils`` namespace is not a plain Python package: it is
constructed dynamically for each task invocation, by extracting imports and
resolving those matching the namespace against a :ref:`search path <ansible_search_path>` derived from the
active configuration.
To reduce the maintenance burden in a collection or in local modules, you can extract
duplicated code into one or more module utilities and import them into your modules. For example, if you have your own custom modules that import a ``my_shared_code`` library, you can place that into a ``./module_utils/my_shared_code.py`` file like this:
.. code-block:: python
from ansible.module_utils.my_shared_code import MySharedCodeClient
When you run ``ansible-playbook``, Ansible will merge any files in your local ``module_utils`` directories into the ``ansible.module_utils`` namespace in the order defined by the :ref:`Ansible search path <ansible_search_path>`.
Naming and finding module utilities
===================================
You can generally tell what a module utility does from its name and/or its location. Generic utilities (shared code used by many different kinds of modules) live in the main ansible/ansible codebase, in the ``common`` subdirectory or in the root directory of ``lib/ansible/module_utils``. Utilities used by a particular set of modules generally live in the same collection as those modules. For example:
* ``lib/ansible/module_utils/urls.py`` contains shared code for parsing URLs
* ``openstack.cloud.plugins.module_utils.openstack.py`` contains utilities for modules that work with OpenStack instances
* ``ansible.netcommon.plugins.module_utils.network.common.config.py`` contains utility functions for use by networking modules
Following this pattern with your own module utilities makes everything easy to find and use.
.. _standard_mod_utils:
Standard module utilities
=========================
Ansible ships with an extensive library of ``module_utils`` files. You can find the module utility source code in the ``lib/ansible/module_utils`` directory under your main Ansible path. We describe the most widely used utilities below. For more details on any specific module utility, please see the `source code for module_utils <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils>`_.
.. include:: shared_snippets/licensing.txt
- ``api.py`` - Supports generic API modules
- ``basic.py`` - General definitions and helper utilities for Ansible modules
- ``common/dict_transformations.py`` - Helper functions for dictionary transformations
- ``common/file.py`` - Helper functions for working with files
- ``common/text/`` - Helper functions for converting and formatting text
- ``common/parameters.py`` - Helper functions for dealing with module parameters
- ``common/sys_info.py`` - Functions for getting distribution and platform information
- ``common/validation.py`` - Helper functions for validating module parameters against a module argument spec
- ``facts/`` - Directory of utilities for modules that return facts. See `PR 23012 <https://github.com/ansible/ansible/pull/23012>`_ for more information
- ``json_utils.py`` - Utilities for filtering unrelated output around module JSON output, like leading and trailing lines
- ``powershell/`` - Directory of definitions and helper functions for Windows PowerShell modules
- ``pycompat24.py`` - Exception workaround for Python 2.4
- ``service.py`` - Utilities to enable modules to work with Linux services (placeholder, not in use)
- ``six/__init__.py`` - Bundled copy of the `Six Python library <https://pypi.org/project/six/>`_ to aid in writing code compatible with both Python 2 and Python 3
- ``splitter.py`` - String splitting and manipulation utilities for working with Jinja2 templates
- ``urls.py`` - Utilities for working with http and https requests
Several commonly-used utilities migrated to collections in Ansible 2.10, including:
- ``ismount.py`` migrated to ``ansible.posix.plugins.module_utils.mount.py`` - Single helper function that fixes os.path.ismount
- ``known_hosts.py`` migrated to ``community.general.plugins.module_utils.known_hosts.py`` - utilities for working with known_hosts file
For a list of migrated content with destination collections, see the `runtime.yml file <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,899 |
Docs: Replace occurrences of "See http://" with a descriptive label in 4 files
|
### Summary
Accessibility guidelines recommend we do not use "See http://<website>" in documentation, but instead provide context around this for screen readers etc.
In this issue, we've identified 4 files that use this convention in the documentation. For each occurrence, replace it with an RST link to [external web page](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages) .
Specifically, use the format \`descriptive phrase \<url\>\`_
List of affected RST pages are in a follow-on comment. You can choose to fix one at a time, using the Edit on GitHub link at the top of the RST page, or in one PR to fix them both.
### Issue Type
Documentation Report
### Component Name
rst/dev_guide/testing/sanity/mypy.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78899
|
https://github.com/ansible/ansible/pull/78959
|
fb8c2daf46d3a9293ee9ea6555279aca0fb62b9a
|
f7c01bc866b1e531f0eacc57ab98294b4745a221
| 2022-09-27T20:47:03Z |
python
| 2022-10-03T19:01:27Z |
docs/docsite/rst/dev_guide/testing/sanity/mypy.rst
|
mypy
====
The ``mypy`` static type checker is used to check the following code against each Python version supported by the controller:
* ``lib/ansible/``
* ``test/lib/ansible_test/_internal/``
Additionally, the following code is checked against Python versions supported only on managed nodes:
* ``lib/ansible/modules/``
* ``lib/ansible/module_utils/``
See `the mypy documentation <https://mypy.readthedocs.io/en/stable/>`_ for additional details.
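
To run this test locally from an ansible/ansible checkout:

.. code-block:: shell

    ansible-test sanity --test mypy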
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,899 |
Docs: Replace occurrences of "See http://" with a descriptive label in 4 files
|
### Summary
Accessibility guidelines recommend we do not use "See http://<website>" in documentation, but instead provide context around this for screen readers etc.
In this issue, we've identified 4 files that use this convention in the documentation. For each occurrence, replace it with an RST link to [external web page](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages) .
Specifically, use the format \`descriptive phrase \<url\>\`_
List of affected RST pages are in a follow-on comment. You can choose to fix one at a time, using the Edit on GitHub link at the top of the RST page, or in one PR to fix them both.
### Issue Type
Documentation Report
### Component Name
rst/dev_guide/testing/sanity/mypy.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78899
|
https://github.com/ansible/ansible/pull/78959
|
fb8c2daf46d3a9293ee9ea6555279aca0fb62b9a
|
f7c01bc866b1e531f0eacc57ab98294b4745a221
| 2022-09-27T20:47:03Z |
python
| 2022-10-03T19:01:27Z |
docs/docsite/rst/dev_guide/testing_validate-modules.rst
|
:orphan:
.. _testing_validate-modules:
****************
validate-modules
****************
.. contents:: Topics
Python program to help test or validate Ansible modules.
``validate-modules`` is one of the ``ansible-test`` Sanity Tests, see :ref:`testing_sanity` for more information.
Originally developed by Matt Martz (@sivel)
Usage
=====
.. code:: shell
cd /path/to/ansible/source
source hacking/env-setup
ansible-test sanity --test validate-modules
Help
====
.. code:: shell
usage: validate-modules [-h] [-w] [--exclude EXCLUDE] [--arg-spec]
[--base-branch BASE_BRANCH] [--format {json,plain}]
[--output OUTPUT]
modules [modules ...]
positional arguments:
modules Path to module or module directory
optional arguments:
-h, --help show this help message and exit
-w, --warnings Show warnings
--exclude EXCLUDE RegEx exclusion pattern
--arg-spec Analyze module argument spec
--base-branch BASE_BRANCH
Used in determining if new options were added
--format {json,plain}
Output format. Default: "plain"
--output OUTPUT Output location, use "-" for stdout. Default "-"
Extending validate-modules
==========================
The ``validate-modules`` tool has a `schema.py <https://github.com/ansible/ansible/blob/devel/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/schema.py>`_ that is used to validate the YAML blocks, such as ``DOCUMENTATION`` and ``RETURNS``.
Codes
=====
============================================================ ================== ==================== =========================================================================================
**Error Code** **Type** **Level** **Sample Message**
------------------------------------------------------------ ------------------ -------------------- -----------------------------------------------------------------------------------------
ansible-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier Ansible version
collection-deprecated-module Documentation Error A module is deprecated and supposed to be removed in the current or an earlier collection version
ansible-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier Ansible version
ansible-module-not-initialized Syntax Error Execution of the module did not result in initialization of AnsibleModule
collection-deprecated-version Documentation Error A feature is deprecated and supposed to be removed in the current or an earlier collection version
deprecated-date Documentation Error A date before today appears as ``removed_at_date`` or in ``deprecated_aliases``
deprecation-mismatch Documentation Error Module marked as deprecated or removed in at least one of the filename, its metadata, or in DOCUMENTATION (setting DOCUMENTATION.deprecated for deprecation or removing all Documentation for removed) but not in all three places.
doc-choices-do-not-match-spec Documentation Error Value for "choices" from the argument_spec does not match the documentation
doc-choices-incompatible-type Documentation Error Choices value from the documentation is not compatible with type defined in the argument_spec
doc-default-does-not-match-spec Documentation Error Value for "default" from the argument_spec does not match the documentation
doc-default-incompatible-type Documentation Error Default value from the documentation is not compatible with type defined in the argument_spec
doc-elements-invalid Documentation Error Documentation specifies elements for argument, when "type" is not ``list``.
doc-elements-mismatch Documentation Error Argument_spec defines elements different than documentation does
doc-missing-type Documentation Error Documentation doesn't specify a type but argument in ``argument_spec`` use default type (``str``)
doc-required-mismatch Documentation Error argument in argument_spec is required but documentation says it is not, or vice versa
doc-type-does-not-match-spec Documentation Error Argument_spec defines type different than documentation does
documentation-error Documentation Error Unknown ``DOCUMENTATION`` error
documentation-syntax-error Documentation Error Invalid ``DOCUMENTATION`` schema
illegal-future-imports Imports Error Only the following ``from __future__`` imports are allowed: ``absolute_import``, ``division``, and ``print_function``.
import-before-documentation Imports Error Import found before documentation variables. All imports must appear below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
import-error Documentation Error ``Exception`` attempting to import module for ``argument_spec`` introspection
import-placement Locations Warning Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
imports-improper-location Imports Error Imports should be directly below ``DOCUMENTATION``/``EXAMPLES``/``RETURN``
incompatible-choices Documentation Error Choices value from the argument_spec is not compatible with type defined in the argument_spec
incompatible-default-type Documentation Error Default value from the argument_spec is not compatible with type defined in the argument_spec
invalid-argument-name Documentation Error Argument in argument_spec must not be one of 'message', 'syslog_facility' as it is used internally by Ansible Core Engine
invalid-argument-spec Documentation Error Argument in argument_spec must be a dictionary/hash when used
invalid-argument-spec-options Documentation Error Suboptions in argument_spec are invalid
invalid-documentation Documentation Error ``DOCUMENTATION`` is not valid YAML
invalid-documentation-markup Documentation Error ``DOCUMENTATION`` or ``RETURN`` contains invalid markup
invalid-documentation-options Documentation Error ``DOCUMENTATION.options`` must be a dictionary/hash when used
invalid-examples Documentation Error ``EXAMPLES`` is not valid YAML
invalid-extension Naming Error Official Ansible modules must have a ``.py`` extension for python modules or a ``.ps1`` for powershell modules
invalid-module-schema Documentation Error ``AnsibleModule`` schema validation error
invalid-removal-version Documentation Error The version at which a feature is supposed to be removed cannot be parsed (for collections, it must be a semantic version, see https://semver.org/)
invalid-requires-extension Naming Error Module ``#AnsibleRequires -CSharpUtil`` should not end in .cs, Module ``#Requires`` should not end in .psm1
missing-doc-fragment Documentation Error ``DOCUMENTATION`` fragment missing
missing-existing-doc-fragment Documentation Warning Pre-existing ``DOCUMENTATION`` fragment missing
missing-documentation Documentation Error No ``DOCUMENTATION`` provided
missing-examples Documentation Error No ``EXAMPLES`` provided
missing-gplv3-license Documentation Error GPLv3 license header not found
missing-module-utils-basic-import Imports Warning Did not find ``ansible.module_utils.basic`` import
missing-module-utils-import-csharp-requirements Imports Error No ``Ansible.ModuleUtils`` or C# Ansible util requirements/imports found
missing-powershell-interpreter Syntax Error Interpreter line is not ``#!powershell``
missing-python-interpreter Syntax Error Interpreter line is not ``#!/usr/bin/python``
missing-return Documentation Error No ``RETURN`` documentation provided
missing-return-legacy Documentation Warning No ``RETURN`` documentation provided for legacy module
missing-suboption-docs Documentation Error Argument in argument_spec has sub-options but documentation does not define sub-options
module-incorrect-version-added Documentation Error Module level ``version_added`` is incorrect
module-invalid-version-added Documentation Error Module level ``version_added`` is not a valid version number
module-utils-specific-import Imports Error ``module_utils`` imports should import specific components, not ``*``
multiple-utils-per-requires Imports Error ``Ansible.ModuleUtils`` requirements do not support multiple modules per statement
multiple-csharp-utils-per-requires Imports Error Ansible C# util requirements do not support multiple utils per statement
no-default-for-required-parameter Documentation Error Option is marked as required but specifies a default. Arguments with a default should not be marked as required
no-log-needed Parameters Error Option name suggests that the option contains a secret value, while ``no_log`` is not specified for this option in the argument spec. If this is a false positive, explicitly set ``no_log=False``
nonexistent-parameter-documented Documentation Error Argument is listed in DOCUMENTATION.options, but not accepted by the module
option-incorrect-version-added Documentation Error ``version_added`` for new option is incorrect
option-invalid-version-added Documentation Error ``version_added`` for option is not a valid version number
parameter-invalid Documentation Error Argument in argument_spec is not a valid python identifier
parameter-invalid-elements Documentation Error Value for "elements" is valid only when value of "type" is ``list``
implied-parameter-type-mismatch Documentation Error Argument_spec implies ``type="str"`` but documentation defines it as different data type
parameter-type-not-in-doc Documentation Error Type value is defined in ``argument_spec`` but documentation doesn't specify a type
parameter-alias-repeated Parameters Error argument in argument_spec has at least one alias specified multiple times in aliases
parameter-alias-self Parameters Error argument in argument_spec is specified as its own alias
parameter-documented-multiple-times Documentation Error argument in argument_spec with aliases is documented multiple times
parameter-list-no-elements Parameters Error argument in argument_spec "type" is specified as ``list`` without defining "elements"
parameter-state-invalid-choice Parameters Error Argument ``state`` includes ``get``, ``list`` or ``info`` as a choice. Functionality should be in an ``_info`` or (if further conditions apply) ``_facts`` module.
python-syntax-error Syntax Error Python ``SyntaxError`` while parsing module
removal-version-must-be-major Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be removed are major versions (x.0.0)
return-syntax-error Documentation Error ``RETURN`` is not valid YAML, ``RETURN`` fragments missing or invalid
return-invalid-version-added Documentation Error ``version_added`` for return value is not a valid version number
subdirectory-missing-init Naming Error Ansible module subdirectories must contain an ``__init__.py``
try-except-missing-has Imports Warning Try/Except ``HAS_`` expression missing
undocumented-parameter Documentation Error Argument is listed in the argument_spec, but not documented in the module
unidiomatic-typecheck Syntax Error Type comparison using ``type()`` found. Use ``isinstance()`` instead
unknown-doc-fragment Documentation Warning Unknown pre-existing ``DOCUMENTATION`` error
use-boto3 Imports Error ``boto`` import found, new modules should use ``boto3``
use-fail-json-not-sys-exit Imports Error ``sys.exit()`` call found. Should be ``exit_json``/``fail_json``
use-module-utils-urls Imports Error ``requests`` import found, should use ``ansible.module_utils.urls`` instead
use-run-command-not-os-call Imports Error ``os.call`` used instead of ``module.run_command``
use-run-command-not-popen Imports Error ``subprocess.Popen`` used instead of ``module.run_command``
use-short-gplv3-license Documentation Error GPLv3 license header should be the :ref:`short form <copyright>` for new modules
mutually_exclusive-type Documentation Error mutually_exclusive entry contains non-string value
mutually_exclusive-collision Documentation Error mutually_exclusive entry has repeated terms
mutually_exclusive-unknown Documentation Error mutually_exclusive entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_one_of-type Documentation Error required_one_of entry contains non-string value
required_one_of-collision Documentation Error required_one_of entry has repeated terms
required_one_of-unknown Documentation Error required_one_of entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_together-type Documentation Error required_together entry contains non-string value
required_together-collision Documentation Error required_together entry has repeated terms
required_together-unknown Documentation Error required_together entry contains option which does not appear in argument_spec (potentially an alias of an option?)
required_if-is_one_of-type Documentation Error required_if entry has a fourth value which is not a bool
required_if-requirements-type Documentation Error required_if entry has a third value (requirements) which is not a list or tuple
required_if-requirements-collision Documentation Error required_if entry has repeated terms in requirements
required_if-requirements-unknown Documentation Error required_if entry's requirements contains option which does not appear in argument_spec (potentially an alias of an option?)
required_if-unknown-key Documentation Error required_if entry's key does not appear in argument_spec (potentially an alias of an option?)
required_if-key-in-requirements Documentation Error required_if entry contains its key in requirements list/tuple
required_if-value-type Documentation Error required_if entry's value is not of the type specified for its key
required_by-collision Documentation Error required_by entry has repeated terms
required_by-unknown Documentation Error required_by entry contains option which does not appear in argument_spec (potentially an alias of an option?)
version-added-must-be-major-or-minor Documentation Error According to the semantic versioning specification (https://semver.org/), the only versions in which features are allowed to be added are major and minor versions (x.y.0)
============================================================ ================== ==================== =========================================================================================
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,899 |
Docs: Replace occurrences of "See http://" with a descriptive label in 4 files
|
### Summary
Accessibility guidelines recommend we do not use "See http://<website>" in documentation, but instead provide context around this for screen readers etc.
In this issue, we've identified 4 files that use this convention in the documentation. For each occurrence, replace it with an RST link to [external web page](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages) .
Specifically, use the format \`descriptive phrase \<url\>\`_
List of affected RST pages are in a follow-on comment. You can choose to fix one at a time, using the Edit on GitHub link at the top of the RST page, or in one PR to fix them both.
### Issue Type
Documentation Report
### Component Name
rst/dev_guide/testing/sanity/mypy.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78899
|
https://github.com/ansible/ansible/pull/78959
|
fb8c2daf46d3a9293ee9ea6555279aca0fb62b9a
|
f7c01bc866b1e531f0eacc57ab98294b4745a221
| 2022-09-27T20:47:03Z |
python
| 2022-10-03T19:01:27Z |
docs/docsite/rst/playbook_guide/playbooks_filters.rst
|
.. _playbooks_filters:
********************************
Using filters to manipulate data
********************************
Filters let you transform JSON data into YAML data, split a URL to extract the hostname, get the SHA1 hash of a string, add or multiply integers, and much more. You can use the Ansible-specific filters documented here to manipulate your data, or use any of the standard filters shipped with Jinja2 - see the list of :ref:`built-in filters <jinja2:builtin-filters>` in the official Jinja2 template documentation. You can also use :ref:`Python methods <jinja2:python-methods>` to transform data. You can :ref:`create custom Ansible filters as plugins <developing_filter_plugins>`, though we generally welcome new filters into the ansible-core repo so everyone can use them.
Because templating happens on the Ansible controller, **not** on the target host, filters execute on the controller and transform data locally.
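For example, a ``lookup`` inside a template reads from the controller's filesystem, not the target's (a minimal sketch; the path here is illustrative):
.. code-block:: yaml+jinja
# This reads /etc/hostname on the controller, even when the task targets a remote host.
{{ lookup('file', '/etc/hostname') }}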
.. contents::
:local:
Handling undefined variables
============================
Filters can help you manage missing or undefined variables by providing defaults or making some variables optional. If you configure Ansible to ignore most undefined variables, you can mark some variables as requiring values with the ``mandatory`` filter.
.. _defaulting_undefined_variables:
Providing default values
------------------------
You can provide default values for variables directly in your templates using the Jinja2 'default' filter. This is often a better approach than failing if a variable is not defined:
.. code-block:: yaml+jinja
{{ some_variable | default(5) }}
In the above example, if the variable 'some_variable' is not defined, Ansible uses the default value 5, rather than raising an "undefined variable" error and failing. If you are working within a role, you can also add a ``defaults/main.yml`` to define the default values for variables in your role.
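As a minimal sketch, a role could declare its fallback values in ``defaults/main.yml`` (the variable names here are illustrative):
.. code-block:: yaml
# roles/myrole/defaults/main.yml
some_variable: 5
another_variable: "fallback"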
Beginning in version 2.8, attempting to access an attribute of an Undefined value in Jinja will return another Undefined value, rather than throwing an error immediately. This means that you can now simply use
a default with a value in a nested data structure (in other words, :code:`{{ foo.bar.baz | default('DEFAULT') }}`) when you do not know if the intermediate values are defined.
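For example, a hedged sketch of that pattern in a task (the ``foo`` structure is illustrative):
.. code-block:: yaml+jinja
- name: Print a nested value, falling back if any level is undefined
  ansible.builtin.debug:
    msg: "{{ foo.bar.baz | default('DEFAULT') }}"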
If you want to use the default value when variables evaluate to false or an empty string you have to set the second parameter to ``true``:
.. code-block:: yaml+jinja
{{ lookup('env', 'MY_USER') | default('admin', true) }}
.. _omitting_undefined_variables:
Making variables optional
-------------------------
By default Ansible requires values for all variables in a templated expression. However, you can make specific variables optional. For example, you might want to use a system default for some items and control the value for others. To make a variable optional, set the default value to the special variable ``omit``:
.. code-block:: yaml+jinja
- name: Touch files with an optional mode
ansible.builtin.file:
dest: "{{ item.path }}"
state: touch
mode: "{{ item.mode | default(omit) }}"
loop:
- path: /tmp/foo
- path: /tmp/bar
- path: /tmp/baz
mode: "0444"
In this example, the default mode for the files ``/tmp/foo`` and ``/tmp/bar`` is determined by the umask of the system. Ansible does not send a value for ``mode``. Only the third file, ``/tmp/baz``, receives the `mode=0444` option.
.. note:: If you are "chaining" additional filters after the ``default(omit)`` filter, you should instead do something like this:
``"{{ foo | default(None) | some_filter or omit }}"``. In this example, the default ``None`` (Python null) value will cause the later filters to fail, which will trigger the ``or omit`` portion of the logic. Using ``omit`` in this manner is very specific to the later filters you are chaining though, so be prepared for some trial and error if you do this.
.. _forcing_variables_to_be_defined:
Defining mandatory values
-------------------------
If you configure Ansible to ignore undefined variables, you may want to define some values as mandatory. By default, Ansible fails if a variable in your playbook or command is undefined. You can configure Ansible to allow undefined variables by setting :ref:`DEFAULT_UNDEFINED_VAR_BEHAVIOR` to ``false``. In that case, you may want to require some variables to be defined. You can do this with:
.. code-block:: yaml+jinja
{{ variable | mandatory }}
The variable value will be used as is, but the template evaluation will raise an error if it is undefined.
A convenient way of requiring a variable to be overridden is to give it an undefined value using the ``undef`` keyword. This can be useful in a role's defaults.
.. code-block:: yaml+jinja
galaxy_url: "https://galaxy.ansible.com"
galaxy_api_key: {{ undef(hint="You must specify your Galaxy API key") }}
Defining different values for true/false/null (ternary)
=======================================================
You can create a test, then define one value to use when the test returns true and another when the test returns false (new in version 1.9):
.. code-block:: yaml+jinja
{{ (status == 'needs_restart') | ternary('restart', 'continue') }}
In addition, you can define one value to use when the test returns true, another when it returns false, and a third when it returns null (new in version 2.8):
.. code-block:: yaml+jinja
{{ enabled | ternary('no shutdown', 'shutdown', omit) }}
Managing data types
===================
You might need to know, change, or set the data type on a variable. For example, a registered variable might contain a dictionary when your next task needs a list, or a user :ref:`prompt <playbooks_prompts>` might return a string when your playbook needs a boolean value. Use the ``type_debug``, ``dict2items``, and ``items2dict`` filters to manage data types. You can also use the data type itself to cast a value as a specific data type.
Discovering the data type
-------------------------
.. versionadded:: 2.3
If you are unsure of the underlying Python type of a variable, you can use the ``type_debug`` filter to display it. This is useful in debugging when you need a particular type of variable:
.. code-block:: yaml+jinja
{{ myvar | type_debug }}
Note that while this may seem like a useful filter for checking that you have the right type of data in a variable, you should often prefer :ref:`type tests <type_tests>`, which let you test for specific data types.
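For example, a minimal sketch contrasting the two approaches (``myvar`` is illustrative):
.. code-block:: yaml+jinja
# Brittle: the class name may be 'str' or 'AnsibleUnicode' depending on context
{{ (myvar | type_debug) == 'str' }}
# Preferred: a type test matches any string type
{{ myvar is string }}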
.. _dict_filter:
Transforming dictionaries into lists
------------------------------------
.. versionadded:: 2.6
Use the ``dict2items`` filter to transform a dictionary into a list of items suitable for :ref:`looping <playbooks_loops>`:
.. code-block:: yaml+jinja
{{ dict | dict2items }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
tags:
Application: payment
Environment: dev
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- key: Application
value: payment
- key: Environment
value: dev
.. versionadded:: 2.8
The ``dict2items`` filter is the reverse of the ``items2dict`` filter.
If you want to configure the names of the keys, the ``dict2items`` filter accepts 2 keyword arguments. Pass the ``key_name`` and ``value_name`` arguments to configure the names of the keys in the list output:
.. code-block:: yaml+jinja
{{ files | dict2items(key_name='file', value_name='path') }}
Dictionary data (before applying the ``dict2items`` filter):
.. code-block:: yaml
files:
users: /etc/passwd
groups: /etc/group
List data (after applying the ``dict2items`` filter):
.. code-block:: yaml
- file: users
path: /etc/passwd
- file: groups
path: /etc/group
Transforming lists into dictionaries
------------------------------------
.. versionadded:: 2.7
Use the ``items2dict`` filter to transform a list into a dictionary, mapping the content into ``key: value`` pairs:
.. code-block:: yaml+jinja
{{ tags | items2dict }}
List data (before applying the ``items2dict`` filter):
.. code-block:: yaml
tags:
- key: Application
value: payment
- key: Environment
value: dev
Dictionary data (after applying the ``items2dict`` filter):
.. code-block:: text
Application: payment
Environment: dev
The ``items2dict`` filter is the reverse of the ``dict2items`` filter.
Not all lists use ``key`` to designate keys and ``value`` to designate values. For example:
.. code-block:: yaml
fruits:
- fruit: apple
color: red
- fruit: pear
color: yellow
- fruit: grapefruit
color: yellow
In this example, you must pass the ``key_name`` and ``value_name`` arguments to configure the transformation. For example:
.. code-block:: yaml+jinja
{{ tags | items2dict(key_name='fruit', value_name='color') }}
If you do not pass these arguments, or do not pass the correct values for your list, you will see ``KeyError: key`` or ``KeyError: my_typo``.
Forcing the data type
---------------------
You can cast values as certain types. For example, if you expect the input "True" from a :ref:`vars_prompt <playbooks_prompts>` and you want Ansible to recognize it as a boolean value instead of a string:
.. code-block:: yaml
- ansible.builtin.debug:
msg: test
when: some_string_value | bool
If you want to perform a mathematical comparison on a fact and you want Ansible to recognize it as an integer instead of a string:
.. code-block:: yaml
- shell: echo "only on Red Hat 6, derivatives, and later"
when: ansible_facts['os_family'] == "RedHat" and ansible_facts['lsb']['major_release'] | int >= 6
.. versionadded:: 1.6
.. _filters_for_formatting_data:
Formatting data: YAML and JSON
==============================
You can switch a data structure in a template from or to JSON or YAML format, with options for formatting, indenting, and loading data. The basic filters are occasionally useful for debugging:
.. code-block:: yaml+jinja
{{ some_variable | to_json }}
{{ some_variable | to_yaml }}
For human readable output, you can use:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json }}
{{ some_variable | to_nice_yaml }}
You can change the indentation of either format:
.. code-block:: yaml+jinja
{{ some_variable | to_nice_json(indent=2) }}
{{ some_variable | to_nice_yaml(indent=8) }}
The ``to_yaml`` and ``to_nice_yaml`` filters use the `PyYAML library`_, which has a default line width of 80 characters. This causes an unexpected line break after the 80th character (if there is a space after the 80th character).
To avoid such behavior and generate long lines, use the ``width`` option. You must use a hardcoded number to define the width, instead of a construction like ``float("inf")``, because the filter does not support proxying Python functions. For example:
.. code-block:: yaml+jinja
{{ some_variable | to_yaml(indent=8, width=1337) }}
{{ some_variable | to_nice_yaml(indent=8, width=1337) }}
The filter does support passing through other YAML parameters. For a full list, see the `PyYAML documentation`_ for ``dump()``.
If you are reading in some already formatted data:
.. code-block:: yaml+jinja
{{ some_variable | from_json }}
{{ some_variable | from_yaml }}
for example:
.. code-block:: yaml+jinja
tasks:
- name: Register JSON output as a variable
ansible.builtin.shell: cat /some/path/to/file.json
register: result
- name: Set a variable
ansible.builtin.set_fact:
myvar: "{{ result.stdout | from_json }}"
Filter `to_json` and Unicode support
------------------------------------
By default, `to_json` and `to_nice_json` will convert the data they receive to ASCII, so:
.. code-block:: yaml+jinja
{{ 'München'| to_json }}
will return:
.. code-block:: text
'M\u00fcnchen'
To keep Unicode characters, pass the parameter `ensure_ascii=False` to the filter:
.. code-block:: yaml+jinja
{{ 'München'| to_json(ensure_ascii=False) }}
'München'
.. versionadded:: 2.7
To parse multi-document YAML strings, the ``from_yaml_all`` filter is provided.
The ``from_yaml_all`` filter will return a generator of parsed YAML documents.
for example:
.. code-block:: yaml+jinja
tasks:
- name: Register a file content as a variable
ansible.builtin.shell: cat /some/path/to/multidoc-file.yaml
register: result
- name: Print the transformed variable
ansible.builtin.debug:
msg: '{{ item }}'
loop: '{{ result.stdout | from_yaml_all | list }}'
Combining and selecting data
============================
You can combine data from multiple sources and types, and select values from large data structures, giving you precise control over complex data.
.. _zip_filter_example:
Combining items from multiple lists: zip and zip_longest
--------------------------------------------------------
.. versionadded:: 2.3
To get a list combining the elements of other lists use ``zip``:
.. code-block:: yaml+jinja
- name: Give me list combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5,6] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"], [4, "d"], [5, "e"], [6, "f"]]
- name: Give me shortest combo of two lists
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip(['a','b','c','d','e','f']) | list }}"
# => [[1, "a"], [2, "b"], [3, "c"]]
To always exhaust all lists use ``zip_longest``:
.. code-block:: yaml+jinja
- name: Give me longest combo of three lists , fill with X
ansible.builtin.debug:
msg: "{{ [1,2,3] | zip_longest(['a','b','c','d','e','f'], [21, 22, 23], fillvalue='X') | list }}"
# => [[1, "a", 21], [2, "b", 22], [3, "c", 23], ["X", "d", "X"], ["X", "e", "X"], ["X", "f", "X"]]
Similarly to the output of the ``items2dict`` filter mentioned above, these filters can be used to construct a ``dict``:
.. code-block:: yaml+jinja
{{ dict(keys_list | zip(values_list)) }}
List data (before applying the ``zip`` filter):
.. code-block:: yaml
keys_list:
- one
- two
values_list:
- apple
- orange
Dictionary data (after applying the ``zip`` filter):
.. code-block:: yaml
one: apple
two: orange
Combining objects and subelements
---------------------------------
.. versionadded:: 2.7
The ``subelements`` filter produces a product of an object and the subelement values of that object, similar to the ``subelements`` lookup. This lets you specify individual subelements to use in a template. For example, this expression:
.. code-block:: yaml+jinja
{{ users | subelements('groups', skip_missing=True) }}
Data before applying the ``subelements`` filter:
.. code-block:: yaml
users:
- name: alice
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
groups:
- wheel
- docker
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
Data after applying the ``subelements`` filter:
.. code-block:: yaml
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- wheel
-
- name: alice
groups:
- wheel
- docker
authorized:
- /tmp/alice/onekey.pub
- /tmp/alice/twokey.pub
- docker
-
- name: bob
authorized:
- /tmp/bob/id_rsa.pub
groups:
- docker
- docker
You can use the transformed data with ``loop`` to iterate over the same subelement for multiple objects:
.. code-block:: yaml+jinja
- name: Set authorized ssh key, extracting just that data from 'users'
ansible.posix.authorized_key:
user: "{{ item.0.name }}"
key: "{{ lookup('file', item.1) }}"
loop: "{{ users | subelements('authorized') }}"
.. _combine_filter:
Combining hashes/dictionaries
-----------------------------
.. versionadded:: 2.0
The ``combine`` filter allows hashes to be merged. For example, the following would override keys in one hash:
.. code-block:: yaml+jinja
{{ {'a':1, 'b':2} | combine({'b':3}) }}
The resulting hash would be:
.. code-block:: text
{'a':1, 'b':3}
The filter can also take multiple arguments to merge:
.. code-block:: yaml+jinja
{{ a | combine(b, c, d) }}
{{ [a, b, c, d] | combine }}
In this case, keys in ``d`` would override those in ``c``, which would override those in ``b``, and so on.
The filter also accepts two optional parameters: ``recursive`` and ``list_merge``.
recursive
Is a boolean, defaulting to ``False``.
It controls whether ``combine`` recursively merges nested hashes.
Note: It does **not** depend on the value of the ``hash_behaviour`` setting in ``ansible.cfg``.
list_merge
Is a string whose possible values are ``replace`` (default), ``keep``, ``append``, ``prepend``, ``append_rp`` or ``prepend_rp``.
It modifies the behaviour of ``combine`` when the hashes to merge contain arrays/lists.
.. code-block:: yaml
default:
a:
x: default
y: default
b: default
c: default
patch:
a:
y: patch
z: patch
b: patch
If ``recursive=False`` (the default), nested hashes aren't merged:
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
y: patch
z: patch
b: patch
c: default
If ``recursive=True``, ``combine`` recurses into nested hashes and merges their keys:
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True) }}
This would result in:
.. code-block:: yaml
a:
x: default
y: patch
z: patch
b: patch
c: default
If ``list_merge='replace'`` (the default), arrays from the right hash will "replace" the ones in the left hash:
.. code-block:: yaml
default:
a:
- default
patch:
a:
- patch
.. code-block:: yaml+jinja
{{ default | combine(patch) }}
This would result in:
.. code-block:: yaml
a:
- patch
If ``list_merge='keep'``, arrays from the left hash will be kept:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='keep') }}
This would result in:
.. code-block:: yaml
a:
- default
If ``list_merge='append'``, arrays from the right hash will be appended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append') }}
This would result in:
.. code-block:: yaml
a:
- default
- patch
If ``list_merge='prepend'``, arrays from the right hash will be prepended to the ones in the left hash:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend') }}
This would result in:
.. code-block:: yaml
a:
- patch
- default
If ``list_merge='append_rp'``, arrays from the right hash will be appended to the ones in the left hash. Elements of arrays in the left hash that are also in the corresponding array of the right hash will be removed ("rp" stands for "remove present"). Duplicate elements that aren't in both hashes are kept:
.. code-block:: yaml
default:
a:
- 1
- 1
- 2
- 3
patch:
a:
- 3
- 4
- 5
- 5
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
- 1
- 1
- 2
- 3
- 4
- 5
- 5
If ``list_merge='prepend_rp'``, the behavior is similar to the one for ``append_rp``, but elements of arrays in the right hash are prepended:
.. code-block:: yaml+jinja
{{ default | combine(patch, list_merge='prepend_rp') }}
This would result in:
.. code-block:: yaml
a:
- 3
- 4
- 5
- 5
- 1
- 1
- 2
``recursive`` and ``list_merge`` can be used together:
.. code-block:: yaml
default:
a:
a':
x: default_value
y: default_value
list:
- default_value
b:
- 1
- 1
- 2
- 3
patch:
a:
a':
y: patch_value
z: patch_value
list:
- patch_value
b:
- 3
- 4
- 4
- key: value
.. code-block:: yaml+jinja
{{ default | combine(patch, recursive=True, list_merge='append_rp') }}
This would result in:
.. code-block:: yaml
a:
a':
x: default_value
y: patch_value
z: patch_value
list:
- default_value
- patch_value
b:
- 1
- 1
- 2
- 3
- 4
- 4
- key: value
.. _extract_filter:
Selecting values from arrays or hashtables
-------------------------------------------
.. versionadded:: 2.1
The `extract` filter is used to map from a list of indices to a list of values from a container (hash or array):
.. code-block:: yaml+jinja
{{ [0,2] | map('extract', ['x','y','z']) | list }}
{{ ['x','y'] | map('extract', {'x': 42, 'y': 31}) | list }}
The results of the above expressions would be:
.. code-block:: none
['x', 'z']
[42, 31]
The filter can take another argument:
.. code-block:: yaml+jinja
{{ groups['x'] | map('extract', hostvars, 'ec2_ip_address') | list }}
This takes the list of hosts in group 'x', looks them up in `hostvars`, and then looks up the `ec2_ip_address` of the result. The final result is a list of IP addresses for the hosts in group 'x'.
The third argument to the filter can also be a list, for a recursive lookup inside the container:
.. code-block:: yaml+jinja
{{ ['a'] | map('extract', b, ['x','y']) | list }}
This would return a list containing the value of `b['a']['x']['y']`.
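As a minimal sketch, assuming a nested structure like the following, the expression above would return ``[42]``:
.. code-block:: yaml
b:
  a:
    x:
      y: 42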
Combining lists
---------------
This set of filters returns a list of combined lists.
permutations
^^^^^^^^^^^^
To get permutations of a list:
.. code-block:: yaml+jinja
- name: Give me largest permutations (order matters)
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations | list }}"
- name: Give me permutations of sets of three
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.permutations(3) | list }}"
combinations
^^^^^^^^^^^^
Combinations always require a set size:
.. code-block:: yaml+jinja
- name: Give me combinations for sets of two
ansible.builtin.debug:
msg: "{{ [1,2,3,4,5] | ansible.builtin.combinations(2) | list }}"
Also see the :ref:`zip_filter`
products
^^^^^^^^
The product filter returns the `cartesian product <https://docs.python.org/3/library/itertools.html#itertools.product>`_ of the input iterables. This is roughly equivalent to nested for-loops in a generator expression.
For example:
.. code-block:: yaml+jinja
- name: Generate multiple hostnames
ansible.builtin.debug:
msg: "{{ ['foo', 'bar'] | product(['com']) | map('join', '.') | join(',') }}"
This would result in:
.. code-block:: json
{ "msg": "foo.com,bar.com" }
.. _json_query_filter:
Selecting JSON data: JSON queries
---------------------------------
To select a single element or a data subset from a complex data structure in JSON format (for example, Ansible facts), use the ``json_query`` filter. The ``json_query`` filter lets you query a complex JSON structure and iterate over it using a loop structure.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
.. note:: You must manually install the **jmespath** dependency on the Ansible controller before using this filter. This filter is built upon **jmespath**, and you can use the same syntax. For examples, see `jmespath examples <https://jmespath.org/examples.html>`_.
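For example, you would typically install it on the controller with ``pip`` (a sketch; adjust to your environment):
.. code-block:: sh
pip install jmespath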
Consider this data structure:
.. code-block:: json
{
"domain_definition": {
"domain": {
"cluster": [
{
"name": "cluster1"
},
{
"name": "cluster2"
}
],
"server": [
{
"name": "server11",
"cluster": "cluster1",
"port": "8080"
},
{
"name": "server12",
"cluster": "cluster1",
"port": "8090"
},
{
"name": "server21",
"cluster": "cluster2",
"port": "9080"
},
{
"name": "server22",
"cluster": "cluster2",
"port": "9090"
}
],
"library": [
{
"name": "lib1",
"target": "cluster1"
},
{
"name": "lib2",
"target": "cluster2"
}
]
}
}
}
To extract all clusters from this structure, you can use the following query:
.. code-block:: yaml+jinja
- name: Display all cluster names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.cluster[*].name') }}"
To extract all server names:
.. code-block:: yaml+jinja
- name: Display all server names
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[*].name') }}"
To extract ports from cluster1:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].port"
.. note:: You can use a variable to make the query more readable.
To print out the ports from cluster1 in a comma-separated string:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1 as a string
ansible.builtin.debug:
msg: "{{ domain_definition | community.general.json_query('domain.server[?cluster==`cluster1`].port') | join(', ') }}"
.. note:: In the example above, quoting literals using backticks avoids escaping quotes and maintains readability.
You can use YAML `single quote escaping <https://yaml.org/spec/current.html#id2534365>`_:
.. code-block:: yaml+jinja
- name: Display all ports from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query('domain.server[?cluster==''cluster1''].port') }}"
.. note:: Escaping single quotes within single quotes in YAML is done by doubling the single quote.
To get a hash map with all ports and names of a cluster:
.. code-block:: yaml+jinja
- name: Display all server ports and names from cluster1
ansible.builtin.debug:
var: item
loop: "{{ domain_definition | community.general.json_query(server_name_cluster1_query) }}"
vars:
server_name_cluster1_query: "domain.server[?cluster=='cluster1'].{name: name, port: port}"
To extract ports from all clusters with name starting with 'server1':
.. code-block:: yaml+jinja
- name: Display ports from servers with names starting with 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?starts_with(name,'server1')].port"
To extract ports from all clusters with name containing 'server1':
.. code-block:: yaml+jinja
- name: Display ports from servers with names containing 'server1'
ansible.builtin.debug:
msg: "{{ domain_definition | to_json | from_json | community.general.json_query(server_name_query) }}"
vars:
server_name_query: "domain.server[?contains(name,'server1')].port"
.. note:: While using ``starts_with`` and ``contains``, you have to use the ``to_json | from_json`` filter for correct parsing of the data structure.
Randomizing data
================
When you need a randomly generated value, use one of these filters.
.. _random_mac_filter:
Random MAC addresses
--------------------
.. versionadded:: 2.6
This filter can be used to generate a random MAC address from a string prefix.
.. note::
This filter has migrated to the `community.general <https://galaxy.ansible.com/community/general>`_ collection. Follow the installation instructions to install that collection.
To get a random MAC address from a string prefix starting with '52:54:00':
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac }}"
# => '52:54:00:ef:1c:03'
Note that if anything is wrong with the prefix string, the filter will issue an error.
.. versionadded:: 2.9
As of Ansible version 2.9, you can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
.. code-block:: yaml+jinja
"{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
.. _random_filter_example:
Random items or numbers
-----------------------
The ``random`` filter in Ansible is an extension of the default Jinja2 random filter, and can be used to return a random item from a sequence of items or to generate a random number based on a range.
To get a random item from a list:
.. code-block:: yaml+jinja
"{{ ['a','b','c'] | random }}"
# => 'c'
To get a random number between 0 (inclusive) and a specified integer (exclusive):
.. code-block:: yaml+jinja
"{{ 60 | random }} * * * * root /script/from/cron"
# => '21 * * * * root /script/from/cron'
To get a random number from 0 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(step=10) }}
# => 70
To get a random number from 1 to 100 but in steps of 10:
.. code-block:: yaml+jinja
{{ 101 | random(1, 10) }}
# => 31
{{ 101 | random(start=1, step=10) }}
# => 51
You can initialize the random number generator from a seed to create random-but-idempotent numbers:
.. code-block:: yaml+jinja
"{{ 60 | random(seed=inventory_hostname) }} * * * * root /script/from/cron"
Shuffling a list
----------------
The ``shuffle`` filter randomizes an existing list, giving a different order every invocation.
To get a random list from an existing list:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle }}
# => ['c','a','b']
{{ ['a','b','c'] | shuffle }}
# => ['b','c','a']
You can initialize the shuffle generator from a seed to generate a random-but-idempotent order:
.. code-block:: yaml+jinja
{{ ['a','b','c'] | shuffle(seed=inventory_hostname) }}
# => ['b','a','c']
The shuffle filter returns a list whenever possible. If you use it with a non-'listable' item, the filter does nothing.
.. _list_filters:
Managing list variables
=======================
You can search for the minimum or maximum value in a list, or flatten a multi-level list.
To get the minimum value from list of numbers:
.. code-block:: yaml+jinja
{{ list1 | min }}
.. versionadded:: 2.11
To get the minimum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | min(attribute='val') }}
To get the maximum value from a list of numbers:
.. code-block:: yaml+jinja
{{ [3, 4, 2] | max }}
.. versionadded:: 2.11
To get the maximum value in a list of objects:
.. code-block:: yaml+jinja
{{ [{'val': 1}, {'val': 2}] | max(attribute='val') }}
.. versionadded:: 2.5
Flatten a list (same thing the `flatten` lookup does):
.. code-block:: yaml+jinja
{{ [3, [4, 2] ] | flatten }}
# => [3, 4, 2]
Flatten only the first level of a list (akin to the `items` lookup):
.. code-block:: yaml+jinja
{{ [3, [4, [2]] ] | flatten(levels=1) }}
# => [3, 4, [2]]
.. versionadded:: 2.11
To preserve nulls in a list (by default, ``flatten`` removes them):
.. code-block:: yaml+jinja
{{ [3, None, [4, [2]] ] | flatten(levels=1, skip_nulls=False) }}
# => [3, None, 4, [2]]
.. _set_theory_filters:
Selecting from sets or lists (set theory)
=========================================
You can select or combine items from sets or lists.
.. versionadded:: 1.4
To get a unique set from a list:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
{{ list1 | unique }}
# => [1, 2, 5, 3, 4, 10]
To get a union of two lists:
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | union(list2) }}
# => [1, 2, 5, 1, 3, 4, 10, 11, 99]
To get the intersection of 2 lists (unique list of all items in both):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | intersect(list2) }}
# => [1, 2, 5, 3, 4]
To get the difference of 2 lists (items in 1 that don't exist in 2):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | difference(list2) }}
# => [10]
To get the symmetric difference of 2 lists (items exclusive to each list):
.. code-block:: yaml+jinja
# list1: [1, 2, 5, 1, 3, 4, 10]
# list2: [1, 2, 3, 4, 5, 11, 99]
{{ list1 | symmetric_difference(list2) }}
# => [10, 11, 99]
.. _math_stuff:
Calculating numbers (math)
==========================
.. versionadded:: 1.9
You can calculate logs, powers, and roots of numbers with Ansible filters. Jinja2 provides other mathematical functions like abs() and round().
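For example, Jinja2's built-in filters cover the simple cases:
.. code-block:: yaml+jinja
{{ -4.2 | abs }}
# => 4.2
{{ 4.27 | round(1) }}
# => 4.3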
Get the logarithm (default is e):
.. code-block:: yaml+jinja
{{ 8 | log }}
# => 2.0794415416798357
Get the base 10 logarithm:
.. code-block:: yaml+jinja
{{ 8 | log(10) }}
# => 0.9030899869919435
Give me the power of 2! (or 5):
.. code-block:: yaml+jinja
{{ 8 | pow(5) }}
# => 32768.0
Square root, or the 5th:
.. code-block:: yaml+jinja
{{ 8 | root }}
# => 2.8284271247461903
{{ 8 | root(5) }}
# => 1.5157165665103982
Managing network interactions
=============================
These filters help you with common network tasks.
.. note::
These filters have migrated to the `ansible.netcommon <https://galaxy.ansible.com/ansible/netcommon>`_ collection. Follow the installation instructions to install that collection.
.. _ipaddr_filter:
IP address filters
------------------
.. versionadded:: 1.9
To test if a string is a valid IP address:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipaddr }}
You can also require a specific IP protocol version:
.. code-block:: yaml+jinja
{{ myvar | ansible.netcommon.ipv4 }}
{{ myvar | ansible.netcommon.ipv6 }}
The ``ipaddr`` filter can also be used to extract specific information from an IP
address. For example, to get the IP address itself from a CIDR, you can use:
.. code-block:: yaml+jinja
{{ '192.0.2.1/24' | ansible.netcommon.ipaddr('address') }}
# => 192.0.2.1
More information about the ``ipaddr`` filter and a complete usage guide can be found
in :ref:`playbooks_filters_ipaddr`.
.. _network_filters:
Network CLI filters
-------------------
.. versionadded:: 2.4
To convert the output of a network device CLI command into structured JSON
output, use the ``parse_cli`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_cli('path/to/spec') }}
The ``parse_cli`` filter will load the spec file and pass the command output
through it, returning JSON output. The spec file should be valid, formatted
YAML that defines how to parse the CLI output and return JSON data. Below is
an example of a valid spec file that will parse the output from the ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
enabled: "{{ item.state != 'act/lshut' }}"
state: "{{ item.state }}"
keys:
vlans:
value: "{{ vlan }}"
items: "^(?P<vlan_id>\\d+)\\s+(?P<name>\\w+)\\s+(?P<state>active|act/lshut|suspended)"
state_static:
value: present
Another common use case for parsing CLI commands is to break large command
output into blocks that can be parsed individually. This can be done using the
``start_block`` and ``end_block`` directives.
.. code-block:: yaml
---
vars:
interface:
name: "{{ item[0].match[0] }}"
state: "{{ item[1].state }}"
mode: "{{ item[2].match[0] }}"
keys:
interfaces:
value: "{{ interface }}"
start_block: "^Ethernet.*$"
end_block: "^$"
items:
- "^(?P<name>Ethernet\\d\\/\\d*)"
- "admin state is (?P<state>.+),"
- "Port mode is (.+)"
The example above will parse the output of ``show interface`` into a list of
hashes.
The network filters also support parsing the output of a CLI command using the
TextFSM library. To parse the CLI output with TextFSM use the following
filter:
.. code-block:: yaml+jinja
{{ output.stdout[0] | ansible.netcommon.parse_cli_textfsm('path/to/fsm') }}
Use of the TextFSM filter requires the TextFSM library to be installed.
Network XML filters
-------------------
.. versionadded:: 2.5
To convert the XML output of a network device command into structured JSON
output, use the ``parse_xml`` filter:
.. code-block:: yaml+jinja
{{ output | ansible.netcommon.parse_xml('path/to/spec') }}
The ``parse_xml`` filter will load the spec file and pass the command output
through it, returning JSON output.
The spec file should be valid, formatted YAML that defines how to parse the XML
output and return JSON data.
Below is an example of a valid spec file that
will parse the output from the ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The spec file above will return a JSON data structure that is a list of hashes
with the parsed VLAN information.
The same command could be parsed into a hash by using the key and values
directives. Here is an example of how to parse the output into a hash
value using the same ``show vlan | display xml`` command.
.. code-block:: yaml
---
vars:
vlan:
key: "{{ item.vlan_id }}"
values:
vlan_id: "{{ item.vlan_id }}"
name: "{{ item.name }}"
desc: "{{ item.desc }}"
enabled: "{{ item.state.get('inactive') != 'inactive' }}"
state: "{% if item.state.get('inactive') == 'inactive'%} inactive {% else %} active {% endif %}"
keys:
vlans:
value: "{{ vlan }}"
top: configuration/vlans/vlan
items:
vlan_id: vlan-id
name: name
desc: description
state: ".[@inactive='inactive']"
The value of ``top`` is the XPath relative to the XML root node.
In the example XML output given below, the value of ``top`` is ``configuration/vlans/vlan``,
which is an XPath expression relative to the root node (<rpc-reply>).
``configuration`` in the value of ``top`` is the outermost container node, and ``vlan``
is the innermost container node.
``items`` is a dictionary of key-value pairs that map user-defined names to XPath expressions
that select elements. Each XPath expression is relative to the XPath value contained in ``top``.
For example, ``vlan_id`` in the spec file is a user-defined name and its value ``vlan-id`` is an
XPath expression relative to the value of ``top``.
Attributes of XML tags can be extracted using XPath expressions. The value of ``state`` in the spec
is an XPath expression used to get the attributes of the ``vlan`` tag in the output XML:
.. code-block:: none
<rpc-reply>
<configuration>
<vlans>
<vlan inactive="inactive">
<name>vlan-1</name>
<vlan-id>200</vlan-id>
<description>This is vlan-1</description>
</vlan>
</vlans>
</configuration>
</rpc-reply>
.. note::
For more information on supported XPath expressions, see `XPath Support <https://docs.python.org/3/library/xml.etree.elementtree.html#xpath-support>`_.
Network VLAN filters
--------------------
.. versionadded:: 2.8
Use the ``vlan_parser`` filter to transform an unsorted list of VLAN integers into a
sorted string list of integers according to IOS-like VLAN list rules. This list has the following properties:
* VLANs are listed in ascending order.
* Three or more consecutive VLANs are listed with a dash.
* The first line of the list can be ``first_line_len`` characters long.
* Subsequent list lines can be ``other_line_len`` characters long.
To sort a VLAN list:
.. code-block:: yaml+jinja
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser }}
This example renders the following sorted list:
.. code-block:: text
['100,1688,3002-3005,3999']
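Both line-length limits can be passed explicitly; a hedged sketch (the values here are illustrative):
.. code-block:: yaml+jinja
{{ [3003, 3004, 3005, 100, 1688, 3002, 3999] | ansible.netcommon.vlan_parser(first_line_len=248, other_line_len=251) }}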
Another example Jinja template:
.. code-block:: yaml+jinja
{% set parsed_vlans = vlans | ansible.netcommon.vlan_parser %}
switchport trunk allowed vlan {{ parsed_vlans[0] }}
{% for i in range (1, parsed_vlans | count) %}
switchport trunk allowed vlan add {{ parsed_vlans[i] }}
{% endfor %}
This allows for dynamic generation of VLAN lists on a Cisco IOS tagged interface. You can store an exhaustive raw list of the exact VLANs required for an interface and then compare that to the parsed IOS output that would actually be generated for the configuration.
.. _hash_filters:
Hashing and encrypting strings and passwords
==============================================
.. versionadded:: 1.9
To get the sha1 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('sha1') }}
# => "b444ac06613fc8d63795be9ad0beaf55011936ac"
To get the md5 hash of a string:
.. code-block:: yaml+jinja
{{ 'test1' | hash('md5') }}
# => "5a105e8b9d40e1329780d62ea2265d8a"
Get a string checksum:
.. code-block:: yaml+jinja
{{ 'test2' | checksum }}
# => "109f4b3c50d7b0df729d299bc6f8e9ef9066971f"
Other hashes (platform dependent):
.. code-block:: yaml+jinja
{{ 'test2' | hash('blowfish') }}
To get a sha512 password hash (random salt):
.. code-block:: yaml+jinja
{{ 'passwordsaresecret' | password_hash('sha512') }}
# => "$6$UIv3676O/ilZzWEE$ktEfFF19NQPF2zyxqxGkAceTnbEgpEKuGBtk6MlU4v2ZorWaVQUMyurgmHCh2Fr4wpmQ/Y.AlXMJkRnIS4RfH/"
To get a sha256 password hash with a specific salt:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt') }}
# => "$5$mysecretsalt$ReKNyDYjkKNqRVwouShhsEqZ3VOE8eoVO4exihOfvG4"
An idempotent method to generate unique hashes per system is to use a salt that is consistent between runs:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha512', 65534 | random(seed=inventory_hostname) | string) }}
# => "$6$43927$lQxPKz2M2X.NWO.gK.t7phLwOKQMcSq72XxDZQ0XzYV6DlL1OD72h417aj16OnHTGxNzhftXJQBcjbunLEepM0"
The hash types available depend on the control system running Ansible: ``hash`` depends on `hashlib <https://docs.python.org/3.8/library/hashlib.html>`_ and ``password_hash`` depends on `passlib <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.html>`_. The `crypt <https://docs.python.org/3.8/library/crypt.html>`_ library is used as a fallback if ``passlib`` is not installed.
.. versionadded:: 2.7
Some hash types allow providing a rounds parameter:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=10000) }}
# => "$5$rounds=10000$mysecretsalt$Tkm80llAxD4YHll6AgNIztKn0vzAACsuuEfYeGP7tm7"
The filter `password_hash` produces different results depending on whether you installed `passlib` or not.
To ensure idempotency, specify `rounds` to be neither `crypt`'s nor `passlib`'s default, which is `5000` for `crypt` and a variable value (`535000` for sha256, `656000` for sha512) for `passlib`:
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('sha256', 'mysecretsalt', rounds=5001) }}
# => "$5$rounds=5001$mysecretsalt$wXcTWWXbfcR8er5IVf7NuquLvnUA6s8/qdtOhAZ.xN."
Hash type 'blowfish' (BCrypt) provides the facility to specify the version of the BCrypt algorithm.
.. code-block:: yaml+jinja
{{ 'secretpassword' | password_hash('blowfish', '1234567890123456789012', ident='2b') }}
# => "$2b$12$123456789012345678901uuJ4qFdej6xnWjOQT.FStqfdoY8dYUPC"
.. note::
The parameter is only available for `blowfish (BCrypt) <https://passlib.readthedocs.io/en/stable/lib/passlib.hash.bcrypt.html#passlib.hash.bcrypt>`_.
Other hash types will simply ignore this parameter.
Valid values for this parameter are: ['2', '2a', '2y', '2b']
.. versionadded:: 2.12
You can also use the Ansible :ref:`vault <vault>` filter to encrypt data:
.. code-block:: yaml+jinja
# simply encrypt my key in a vault
vars:
myvaultedkey: "{{ keyrawdata|vault(passphrase) }}"
- name: save templated vaulted data
template: src=dump_template_data.j2 dest=/some/key/vault.txt
vars:
mysalt: '{{ 2**256|random(seed=inventory_hostname) }}'
template_data: '{{ secretdata|vault(vaultsecret, salt=mysalt) }}'
And then decrypt it using the unvault filter:
.. code-block:: yaml+jinja
# simply decrypt my key from a vault
vars:
mykey: "{{ myvaultedkey|unvault(passphrase) }}"
- name: save templated unvaulted data
template: src=dump_template_data.j2 dest=/some/key/clear.txt
vars:
template_data: '{{ secretdata|unvault(vaultsecret) }}'
.. _other_useful_filters:
Manipulating text
=================
Several filters work with text, including URLs, file names, and path names.
.. _comment_filter:
Adding comments to files
------------------------
The ``comment`` filter lets you create comments in a file from text in a template, with a variety of comment styles. By default Ansible uses ``#`` to start a comment line and adds a blank comment line above and below your comment text. For example the following:
.. code-block:: yaml+jinja
{{ "Plain style (default)" | comment }}
produces this output:
.. code-block:: text
#
# Plain style (default)
#
Ansible offers styles for comments in C (``//...``), C block
(``/*...*/``), Erlang (``%...``) and XML (``<!--...-->``):
.. code-block:: yaml+jinja
{{ "C style" | comment('c') }}
{{ "C block style" | comment('cblock') }}
{{ "Erlang style" | comment('erlang') }}
{{ "XML style" | comment('xml') }}
You can define a custom comment character. This filter:
.. code-block:: yaml+jinja
{{ "My Special Case" | comment(decoration="! ") }}
produces:
.. code-block:: text
!
! My Special Case
!
You can fully customize the comment style:
.. code-block:: yaml+jinja
{{ "Custom style" | comment('plain', prefix='#######\n#', postfix='#\n#######\n ###\n #') }}
That creates the following output:
.. code-block:: text
#######
#
# Custom style
#
#######
###
#
The filter can also be applied to any Ansible variable. For example, to
make the output of the ``ansible_managed`` variable more readable, we can
change the definition in the ``ansible.cfg`` file to this:
.. code-block:: ini
[defaults]
ansible_managed = This file is managed by Ansible.%n
template: {file}
date: %Y-%m-%d %H:%M:%S
user: {uid}
host: {host}
and then use the variable with the `comment` filter:
.. code-block:: yaml+jinja
{{ ansible_managed | comment }}
which produces this output:
.. code-block:: sh
#
# This file is managed by Ansible.
#
# template: /home/ansible/env/dev/ansible_managed/roles/role1/templates/test.j2
# date: 2015-09-10 11:02:58
# user: ansible
# host: myhost
#
URLEncode Variables
-------------------
The ``urlencode`` filter quotes data for use in a URL path or query using UTF-8:
.. code-block:: yaml+jinja
{{ 'Trollhättan' | urlencode }}
# => 'Trollh%C3%A4ttan'
Splitting URLs
--------------
.. versionadded:: 2.4
The ``urlsplit`` filter extracts the fragment, hostname, netloc, password, path, port, query, scheme, and username from an URL. With no arguments, returns a dictionary of all the fields:
.. code-block:: yaml+jinja
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('hostname') }}
# => 'www.acme.com'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('netloc') }}
# => 'user:[email protected]:9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('username') }}
# => 'user'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('password') }}
# => 'password'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('path') }}
# => '/dir/index.html'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('port') }}
# => '9000'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('scheme') }}
# => 'http'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('query') }}
# => 'query=term'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit('fragment') }}
# => 'fragment'
{{ "http://user:[email protected]:9000/dir/index.html?query=term#fragment" | urlsplit }}
# =>
# {
# "fragment": "fragment",
# "hostname": "www.acme.com",
# "netloc": "user:[email protected]:9000",
# "password": "password",
# "path": "/dir/index.html",
# "port": 9000,
# "query": "query=term",
# "scheme": "http",
# "username": "user"
# }
Searching strings with regular expressions
------------------------------------------
To search in a string or extract parts of a string with a regular expression, use the ``regex_search`` filter:
.. code-block:: yaml+jinja
# Extracts the database name from a string
{{ 'server1/database42' | regex_search('database[0-9]+') }}
# => 'database42'
# Example for a case insensitive search in multiline mode
{{ 'foo\nBAR' | regex_search('^bar', multiline=True, ignorecase=True) }}
# => 'BAR'
# Extracts server and database id from a string
{{ 'server1/database42' | regex_search('server([0-9]+)/database([0-9]+)', '\\1', '\\2') }}
# => ['1', '42']
# Extracts dividend and divisor from a division
{{ '21/42' | regex_search('(?P<dividend>[0-9]+)/(?P<divisor>[0-9]+)', '\\g<dividend>', '\\g<divisor>') }}
# => ['21', '42']
The ``regex_search`` filter returns an empty string if it cannot find a match:
.. code-block:: yaml+jinja
{{ 'ansible' | regex_search('foobar') }}
# => ''
.. note::
The ``regex_search`` filter returns ``None`` when used in a Jinja expression (for example in conjunction with operators, other filters, and so on). See the two examples below.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') == '' }}
# => False
{{ 'ansible' | regex_search('foobar') is none }}
# => True
This is due to historic behavior and the custom re-implementation of some of the Jinja internals in Ansible. Enable the ``jinja2_native`` setting if you want the ``regex_search`` filter to always return ``None`` if it cannot find a match. See :ref:`jinja2_faqs` for details.
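A minimal sketch of enabling that setting in ``ansible.cfg``:
.. code-block:: ini
[defaults]
jinja2_native = true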
To extract all occurrences of regex matches in a string, use the ``regex_findall`` filter:
.. code-block:: yaml+jinja
# Returns a list of all IPv4 addresses in the string
{{ 'Some DNS servers are 8.8.8.8 and 8.8.4.4' | regex_findall('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}
# => ['8.8.8.8', '8.8.4.4']
# Returns all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_findall('^.ar$', multiline=True, ignorecase=True) }}
# => ['CAR', 'tar', 'bar']
To replace text in a string with regex, use the ``regex_replace`` filter:
.. code-block:: yaml+jinja
# Convert "ansible" to "able"
{{ 'ansible' | regex_replace('^a.*i(.*)$', 'a\\1') }}
# => 'able'
# Convert "foobar" to "bar"
{{ 'foobar' | regex_replace('^f.*o(.*)$', '\\1') }}
# => 'bar'
# Convert "localhost:80" to "localhost, 80" using named groups
{{ 'localhost:80' | regex_replace('^(?P<host>.+):(?P<port>\\d+)$', '\\g<host>, \\g<port>') }}
# => 'localhost, 80'
# Convert "localhost:80" to "localhost"
{{ 'localhost:80' | regex_replace(':80') }}
# => 'localhost'
# Comment all lines that end with "ar"
{{ 'CAR\ntar\nfoo\nbar\n' | regex_replace('^(.ar)$', '#\\1', multiline=True, ignorecase=True) }}
# => '#CAR\n#tar\nfoo\n#bar\n'
.. note::
If you want to match the whole string and you are using ``*``, make sure to always wrap your regular expression with the start/end anchors. For example ``^(.*)$`` will always match only one result, while ``(.*)`` on some Python versions will match the whole string and an empty string at the end, which means it will make two replacements:
.. code-block:: yaml+jinja
# add "https://" prefix to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '(.+)', 'https://\\1') | list }}
{{ hosts | map('regex_replace', '^', 'https://') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', 'https://\\1') | list }}
# append ':80' to each item in a list
GOOD:
{{ hosts | map('regex_replace', '^(.*)$', '\\1:80') | list }}
{{ hosts | map('regex_replace', '(.+)', '\\1:80') | list }}
{{ hosts | map('regex_replace', '$', ':80') | list }}
BAD:
{{ hosts | map('regex_replace', '(.*)', '\\1:80') | list }}
.. note::
Prior to Ansible 2.0, if the ``regex_replace`` filter was used with variables inside YAML arguments (as opposed to simpler 'key=value' arguments), then you needed to escape backreferences (for example, ``\\1``) with 4 backslashes (``\\\\``) instead of 2 (``\\``).
.. versionadded:: 2.0
To escape special characters within a standard Python regex, use the ``regex_escape`` filter (using the default ``re_type='python'`` option):
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o\(\.\*\)\$'
{{ '^f.*o(.*)$' | regex_escape() }}
.. versionadded:: 2.8
To escape special characters within a POSIX basic regex, use the ``regex_escape`` filter with the ``re_type='posix_basic'`` option:
.. code-block:: yaml+jinja
# convert '^f.*o(.*)$' to '\^f\.\*o(\.\*)\$'
{{ '^f.*o(.*)$' | regex_escape('posix_basic') }}
Managing file names and path names
----------------------------------
To get the last name of a file path, like 'foo.txt' out of '/etc/asdf/foo.txt':
.. code-block:: yaml+jinja
{{ path | basename }}
To get the last name of a Windows-style file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_basename }}
To separate the Windows drive letter from the rest of a file path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_splitdrive }}
To get only the Windows drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | first }}
To get the rest of the path without the drive letter:
.. code-block:: yaml+jinja
{{ path | win_splitdrive | last }}
To get the directory from a path:
.. code-block:: yaml+jinja
{{ path | dirname }}
To get the directory from a Windows path (new in version 2.0):
.. code-block:: yaml+jinja
{{ path | win_dirname }}
To expand a path containing a tilde (`~`) character (new in version 1.5):
.. code-block:: yaml+jinja
{{ path | expanduser }}
To expand a path containing environment variables:
.. code-block:: yaml+jinja
{{ path | expandvars }}
.. note:: `expandvars` expands local variables; using it on remote paths can lead to errors.
.. versionadded:: 2.6
To get the real path of a link (new in version 1.8):
.. code-block:: yaml+jinja
{{ path | realpath }}
To get the relative path of a link, from a start point (new in version 1.7):
.. code-block:: yaml+jinja
{{ path | relpath('/etc') }}
To get the root and extension of a path or file name (new in version 2.0):
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be ('nginx', '.conf')
{{ path | splitext }}
The ``splitext`` filter always returns a pair of strings. The individual components can be accessed by using the ``first`` and ``last`` filters:
.. code-block:: yaml+jinja
# with path == 'nginx.conf' the return would be 'nginx'
{{ path | splitext | first }}
# with path == 'nginx.conf' the return would be '.conf'
{{ path | splitext | last }}
To join one or more path components:
.. code-block:: yaml+jinja
{{ ('/etc', path, 'subdir', file) | path_join }}
.. versionadded:: 2.10
Manipulating strings
====================
To add quotes for shell usage:
.. code-block:: yaml+jinja
- name: Run a shell command
ansible.builtin.shell: echo {{ string_value | quote }}
To concatenate a list into a string:
.. code-block:: yaml+jinja
{{ list | join(" ") }}
To split a string into a list:
.. code-block:: yaml+jinja
{{ csv_string | split(",") }}
.. versionadded:: 2.11
To work with Base64 encoded strings:
.. code-block:: yaml+jinja
{{ encoded | b64decode }}
{{ decoded | string | b64encode }}
As of version 2.6, you can define the type of encoding to use; the default is ``utf-8``:
.. code-block:: yaml+jinja
{{ encoded | b64decode(encoding='utf-16-le') }}
{{ decoded | string | b64encode(encoding='utf-16-le') }}
.. note:: The ``string`` filter is only required for Python 2 and ensures that text to encode is a unicode string. Without that filter before ``b64encode``, the wrong value will be encoded.
.. versionadded:: 2.6
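As a hedged end-to-end sketch, the ``ansible.builtin.slurp`` module returns file content base64-encoded, which pairs naturally with ``b64decode``:
.. code-block:: yaml+jinja
- name: Read a remote file
  ansible.builtin.slurp:
    src: /etc/hostname
  register: remote_file

- name: Print its decoded content
  ansible.builtin.debug:
    msg: "{{ remote_file.content | b64decode }}"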
Managing UUIDs
==============
To create a namespaced UUIDv5:
.. code-block:: yaml+jinja
{{ string | to_uuid(namespace='11111111-2222-3333-4444-555555555555') }}
.. versionadded:: 2.10
To create a namespaced UUIDv5 using the default Ansible namespace '361E6D51-FAEC-444A-9079-341386DA8E2E':
.. code-block:: yaml+jinja
{{ string | to_uuid }}
.. versionadded:: 1.9
To make use of one attribute from each item in a list of complex variables, use the :func:`Jinja2 map filter <jinja2:jinja-filters.map>`:
.. code-block:: yaml+jinja
# get a comma-separated list of the mount points (for example, "/,/mnt/stuff") on a host
{{ ansible_mounts | map(attribute='mount') | join(',') }}
Handling dates and times
========================
To get a date object from a string, use the `to_datetime` filter:
.. code-block:: yaml+jinja
# Get total amount of seconds between two dates. Default date format is %Y-%m-%d %H:%M:%S but you can pass your own format
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).total_seconds() }}
# Get remaining seconds after delta has been calculated. NOTE: This does NOT convert years, days, hours, and so on to seconds. For that, use total_seconds()
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2016-08-14 18:00:00" | to_datetime)).seconds }}
# This expression evaluates to "12" and not "132". Delta is 2 hours, 12 seconds
# get amount of days between two dates. This returns only number of days and discards remaining hours, minutes, and seconds
{{ (("2016-08-14 20:00:12" | to_datetime) - ("2015-12-25" | to_datetime('%Y-%m-%d'))).days }}
.. note:: For a full list of format codes for working with python date format strings, see https://docs.python.org/3/library/datetime.html#strftime-and-strptime-behavior.
.. versionadded:: 2.4
To format a date using a string (like with the shell date command), use the "strftime" filter:
.. code-block:: yaml+jinja
# Display year-month-day
{{ '%Y-%m-%d' | strftime }}
# => "2021-03-19"
# Display hour:min:sec
{{ '%H:%M:%S' | strftime }}
# => "21:51:04"
# Use ansible_date_time.epoch fact
{{ '%Y-%m-%d %H:%M:%S' | strftime(ansible_date_time.epoch) }}
# => "2021-03-19 21:54:09"
# Use arbitrary epoch value
{{ '%Y-%m-%d' | strftime(0) }} # => 1970-01-01
{{ '%Y-%m-%d' | strftime(1441357287) }} # => 2015-09-04
.. versionadded:: 2.13
strftime takes an optional utc argument, defaulting to False, meaning times are in the local timezone::
{{ '%H:%M:%S' | strftime }} # time now in local timezone
{{ '%H:%M:%S' | strftime(utc=True) }} # time now in UTC
.. note:: To get all string possibilities, check https://docs.python.org/3/library/time.html#time.strftime
Getting Kubernetes resource names
=================================
.. note::
These filters have migrated to the `kubernetes.core <https://galaxy.ansible.com/kubernetes/core>`_ collection. Follow the installation instructions to install that collection.
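For example, the collection can typically be installed with ``ansible-galaxy`` (a general sketch; your environment may require a different installation method):

.. code-block:: console

   $ ansible-galaxy collection install kubernetes.core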
Use the "k8s_config_resource_name" filter to obtain the name of a Kubernetes ConfigMap or Secret,
including its hash:
.. code-block:: yaml+jinja
{{ configmap_resource_definition | kubernetes.core.k8s_config_resource_name }}
This can then be used to reference hashes in Pod specifications:
.. code-block:: yaml+jinja
my_secret:
kind: Secret
metadata:
name: my_secret_name
deployment_resource:
kind: Deployment
spec:
template:
spec:
containers:
- envFrom:
- secretRef:
name: {{ my_secret | kubernetes.core.k8s_config_resource_name }}
.. versionadded:: 2.8
.. _PyYAML library: https://pyyaml.org/
.. _PyYAML documentation: https://pyyaml.org/wiki/PyYAMLDocumentation
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
:ref:`playbooks_loops`
Looping in playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`tips_and_tricks`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,914 |
Docs: Replace occurrences of "See http://" with a descriptive label in porting guides
|
### Summary
Accessibility guidelines recommend we do not use "See http://<website>" in documentation, but instead provide descriptive link text for screen readers and other assistive technologies.
In this issue, we've identified 4 files that use this convention in the porting guides. For each occurrence, replace it with an RST link to an [external web page](https://sublime-and-sphinx-guide.readthedocs.io/en/latest/references.html#links-to-external-web-pages).
Specifically, use the format \`descriptive phrase \<url\>\`_
The list of affected RST pages is in a follow-on comment. You can choose to fix one at a time, using the Edit on GitHub link at the top of the RST page, or in one PR to fix them all; an illustrative before/after example follows.
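For example (illustrative; the URL is one that appears later in these guides):

```rst
Before: See https://github.com/kislyuk/argcomplete/issues/351 for additional details.
After:  See `this argcomplete issue <https://github.com/kislyuk/argcomplete/issues/351>`_ for additional details.
```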
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/porting_guides/porting_guide_5.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78914
|
https://github.com/ansible/ansible/pull/78954
|
ba3264253859b95d621727259615546c0927ca63
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
| 2022-09-28T19:49:43Z |
python
| 2022-10-03T20:35:06Z |
docs/docsite/rst/porting_guides/porting_guide_5.rst
|
..
THIS DOCUMENT IS AUTOMATICALLY GENERATED BY ANTSIBULL! PLEASE DO NOT EDIT MANUALLY! (YOU PROBABLY WANT TO EDIT porting_guide_core_2.12.rst)
.. _porting_5_guide:
=======================
Ansible 5 Porting Guide
=======================
.. contents::
:local:
:depth: 2
Ansible 5 is based on Ansible-core 2.12.
We suggest you read this page along with the `Ansible 5 Changelog <https://github.com/ansible-community/ansible-build-data/blob/main/5/CHANGELOG-v5.rst>`_ to understand what updates you may need to make.
Playbook
========
* When calling tasks and setting ``async``, setting ``ANSIBLE_ASYNC_DIR`` under ``environment:`` is no longer valid. Instead, use the shell configuration variable ``async_dir``, for example by setting ``ansible_async_dir``:
.. code-block:: yaml
tasks:
- dnf:
name: '*'
state: latest
async: 300
poll: 5
vars:
ansible_async_dir: /path/to/my/custom/dir
* The ``undef()`` function is added to the templating environment for creating undefined variables directly in a template. Optionally, a hint may be provided for variables which are intended to be overridden.
.. code-block:: yaml
vars:
old: "{{ undef }}"
new: "{{ undef() }}"
new_with_hint: "{{ undef(hint='You must override this variable') }}"
Python Interpreter Discovery
============================
The default value of ``INTERPRETER_PYTHON`` changed to ``auto``. The list of Python interpreters in ``INTERPRETER_PYTHON_FALLBACK`` changed to prefer Python 3 over Python 2. The combination of these two changes means the new default behavior is to quietly prefer Python 3 over Python 2 on remote hosts. Previously a deprecation warning was issued in situations where interpreter discovery would have used Python 3 but the interpreter was set to ``/usr/bin/python``.
``INTERPRETER_PYTHON_FALLBACK`` can be changed from the default list of interpreters by setting the ``ansible_interpreter_python_fallback`` variable.
See :ref:`interpreter discovery documentation <interpreter_discovery>` for more details.
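As a minimal sketch of overriding the fallback list, set the variable wherever host variables are defined; the interpreter paths shown here are illustrative and must match what is actually installed on your hosts:

.. code-block:: yaml

   # group_vars/all.yml
   ansible_interpreter_python_fallback:
     - /usr/bin/python3
     - /usr/libexec/platform-python
     - /usr/bin/python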
Command Line
============
* Python 3.8 on the controller node is a hard requirement for this release. The command line scripts will not function with a lower Python version.
* ``ansible-vault`` no longer supports ``PyCrypto`` and requires ``cryptography``.
Deprecated
==========
* Python 2.6 on the target node is deprecated in this release. ``ansible-core`` 2.13 will remove support for Python 2.6.
* Bare variables in conditionals: ``when`` conditionals no longer automatically parse string booleans such as ``"true"`` and ``"false"`` into actual booleans. Any variable containing a non-empty string is considered true. This was previously configurable with the ``CONDITIONAL_BARE_VARS`` configuration option (and the ``ANSIBLE_CONDITIONAL_BARE_VARS`` environment variable). This setting no longer has any effect. Users can work around the issue by using the ``|bool`` filter:
.. code-block:: yaml
vars:
teardown: 'false'
tasks:
- include_tasks: teardown.yml
when: teardown | bool
- include_tasks: provision.yml
when: not teardown | bool
* The ``_remote_checksum()`` method in ``ActionBase`` is deprecated. Any action plugin using this method should use ``_execute_remote_stat()`` instead.
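A hedged sketch of the replacement call inside a custom action plugin follows; the remote path is illustrative, and ``task_vars`` is assumed to be the dict passed into the plugin's ``run()`` method:

.. code-block:: python

   # inside ActionModule.run(self, tmp=None, task_vars=None)
   remote_stat = self._execute_remote_stat(
       path='/etc/myapp.conf',  # illustrative remote path
       all_vars=task_vars,
       follow=False,
   )
   if remote_stat.get('exists'):
       checksum = remote_stat.get('checksum')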
Modules
=======
* ``cron`` now requires ``name`` to be specified in all cases.
* ``cron`` no longer allows a ``reboot`` parameter. Use ``special_time: reboot`` instead (see the sketch after this list).
* ``hostname`` - On FreeBSD, the ``before`` result will no longer be ``"temporarystub"`` if permanent hostname file does not exist. It will instead be ``""`` (empty string) for consistency with other systems.
* ``hostname`` - On OpenRC and Solaris based systems, the ``before`` result will no longer be ``"UNKNOWN"`` if the permanent hostname file does not exist. It will instead be ``""`` (empty string) for consistency with other systems.
* ``pip`` now uses the ``pip`` Python module installed for the Ansible module's Python interpreter, if available, unless ``executable`` or ``virtualenv`` were specified.
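For instance, a task that previously used the ``reboot`` parameter can be written as follows (a minimal sketch; the job command and name are illustrative):

.. code-block:: yaml

   - name: Run cleanup at boot
     ansible.builtin.cron:
       name: cleanup at reboot   # name is now required in all cases
       special_time: reboot
       job: /usr/local/bin/cleanup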
Modules removed
---------------
The following modules no longer exist:
* No notable changes
Deprecation notices
-------------------
No notable changes
Noteworthy module changes
-------------------------
No notable changes
Plugins
=======
* The ``unique`` filter with Jinja2 < 2.10 is case-sensitive and now coherently raises an error when ``case_sensitive=False`` instead of when ``case_sensitive=True``.
* Set theory filters (``intersect``, ``difference``, ``symmetric_difference`` and ``union``) are now case-sensitive. Explicitly use ``case_sensitive=False`` to keep the previous behavior (see the sketch after this list). Note: with Jinja2 < 2.10, the filters were already case-sensitive by default.
* ``password_hash`` now uses ``passlib`` defaults when an option is unspecified; for example, ``bcrypt_sha256`` now defaults to the "2b" format, and if the "2a" format is required it must be specified explicitly.
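A minimal sketch of keeping the old case-insensitive behavior (the list variables are illustrative):

.. code-block:: yaml+jinja

   {{ list_one | intersect(list_two, case_sensitive=False) }}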
Porting custom scripts
======================
No notable changes
Networking
==========
No notable changes
Porting Guide for v5.9.0
========================
Added Collections
-----------------
- cisco.dnac (version 6.4.0)
- community.sap_libs (version 1.1.0)
Major Changes
-------------
fortinet.fortios
~~~~~~~~~~~~~~~~
- Support FortiOS 7.0.2, 7.0.3, 7.0.4, 7.0.5.
Deprecated Features
-------------------
- The collection ``community.sap`` has been renamed to ``community.sap_libs``. For now both collections are included in Ansible. The content in ``community.sap`` will be replaced with deprecated redirects to the new collection in Ansible 7.0.0, and these redirects will eventually be removed from Ansible. Please update your FQCNs for ``community.sap``.
community.docker
~~~~~~~~~~~~~~~~
- Support for Ansible 2.9 and ansible-base 2.10 is deprecated, and will be removed in the next major release (community.docker 3.0.0). Some modules might still work with these versions afterwards, but we will no longer keep compatibility code that was needed to support them (https://github.com/ansible-collections/community.docker/pull/361).
- The dependency on docker-compose for Execution Environments is deprecated and will be removed in community.docker 3.0.0. The `Python docker-compose library <https://pypi.org/project/docker-compose/>`__ is unmaintained and can cause dependency issues. You can still install it manually in an Execution Environment when needed (https://github.com/ansible-collections/community.docker/pull/373).
- Various modules - the default of ``tls_hostname`` that was supposed to be removed in community.docker 2.0.0 will now be removed in version 3.0.0 (https://github.com/ansible-collections/community.docker/pull/362).
- docker_stack - the return values ``out`` and ``err`` that were supposed to be removed in community.docker 2.0.0 will now be removed in version 3.0.0 (https://github.com/ansible-collections/community.docker/pull/362).
Porting Guide for v5.8.0
========================
Added Collections
-----------------
- vmware.vmware_rest (version 2.1.5)
Breaking Changes
----------------
vmware.vmware_rest
~~~~~~~~~~~~~~~~~~
- The vmware_rest 2.0.0 support vSphere 7.0.2 onwards.
- vcenter_vm_storage_policy - the format of the ``disks`` parameter has changed.
- vcenter_vm_storage_policy - the module has a new mandatory parameter: ``vm_home``.
Major Changes
-------------
community.mysql
~~~~~~~~~~~~~~~
- The community.mysql collection no longer supports ``Ansible 2.9`` and ``ansible-base 2.10``. While we take no active measures to prevent usage and there are no plans to introduce incompatible code to the modules, we will stop testing against ``Ansible 2.9`` and ``ansible-base 2.10``. Both will very soon be End of Life and if you are still using them, you should consider upgrading to the ``latest Ansible / ansible-core 2.11 or later`` as soon as possible (https://github.com/ansible-collections/community.mysql/pull/343).
community.postgresql
~~~~~~~~~~~~~~~~~~~~
- The community.postgresql collection no longer supports ``Ansible 2.9`` and ``ansible-base 2.10``. While we take no active measures to prevent usage and there are no plans to introduce incompatible code to the modules, we will stop testing against ``Ansible 2.9`` and ``ansible-base 2.10``. Both will very soon be End of Life and if you are still using them, you should consider upgrading to the ``latest Ansible / ansible-core 2.11 or later`` as soon as possible (https://github.com/ansible-collections/community.postgresql/pull/245).
Deprecated Features
-------------------
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- token_validate options - the shared auth option ``token_validate`` will change its default from ``True`` to ``False`` in community.hashi_vault version 4.0.0. The ``vault_login`` lookup and module will keep the default value of ``True`` (https://github.com/ansible-collections/community.hashi_vault/issues/248).
community.network
~~~~~~~~~~~~~~~~~
- Support for Ansible 2.9 and ansible-base 2.10 is deprecated, and will be removed in the next major release (community.network 4.0.0) this spring. While most content will probably still work with ansible-base 2.10, we will remove symbolic links for modules and action plugins, which will make it impossible to use them with Ansible 2.9 anymore. Please use community.network 3.x.y with Ansible 2.9 and ansible-base 2.10, as these releases will continue to support Ansible 2.9 and ansible-base 2.10 even after they are End of Life (https://github.com/ansible-community/community-topics/issues/50, https://github.com/ansible-collections/community.network/pull/382).
vmware.vmware_rest
~~~~~~~~~~~~~~~~~~
- vcenter_vm_storage_policy_compliance - drop the module; it returns a 404 error.
- vcenter_vm_tools - remove the ``upgrade`` state.
- vcenter_vm_tools_installer - remove the module from the collection.
Porting Guide for v5.7.0
========================
Major Changes
-------------
community.postgresql
~~~~~~~~~~~~~~~~~~~~
- postgresql_user - the ``priv`` argument has been deprecated and will be removed in ``community.postgresql 3.0.0``. Please use the ``postgresql_privs`` module to grant/revoke privileges instead (https://github.com/ansible-collections/community.postgresql/issues/212).
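A hedged sketch of granting privileges with ``postgresql_privs`` instead; the database, role, and object names are illustrative placeholders:

.. code-block:: yaml

   - community.postgresql.postgresql_privs:
       database: mydb
       roles: myuser
       type: table
       objs: mytable
       privs: SELECT,INSERT
       state: present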
fortinet.fortios
~~~~~~~~~~~~~~~~
- Support FortiOS 7.0.2, 7.0.3, 7.0.4, 7.0.5.
Deprecated Features
-------------------
community.general
~~~~~~~~~~~~~~~~~
- nmcli - deprecate default hairpin mode for a bridge. This so we can change it to ``false`` in community.general 7.0.0, as this is also the default in ``nmcli`` (https://github.com/ansible-collections/community.general/pull/4334).
- proxmox inventory plugin - the current default ``true`` of the ``want_proxmox_nodes_ansible_host`` option has been deprecated. The default will change to ``false`` in community.general 6.0.0. To keep the current behavior, explicitly set ``want_proxmox_nodes_ansible_host`` to ``true`` in your inventory configuration. We suggest to already switch to the new behavior by explicitly setting it to ``false``, and by using ``compose:`` to set ``ansible_host`` to the correct value. See the examples in the plugin documentation for details (https://github.com/ansible-collections/community.general/pull/4466).
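As a hedged illustration of that suggestion (authentication options are omitted, the URL is a placeholder, and the ``compose`` expression must be adapted to the facts your plugin configuration gathers):

.. code-block:: yaml

   # proxmox.yml (inventory source)
   plugin: community.general.proxmox
   url: https://proxmox.example.com:8006
   want_proxmox_nodes_ansible_host: false
   compose:
     # replace with an expression over the host facts available to you
     ansible_host: inventory_hostname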
Porting Guide for v5.6.0
========================
Added Collections
-----------------
- community.sap (version 1.0.0)
Deprecated Features
-------------------
cisco.ios
~~~~~~~~~
- Deprecates lldp module.
Porting Guide for v5.5.0
========================
Known Issues
------------
community.general
~~~~~~~~~~~~~~~~~
- pacman - ``update_cache`` cannot differentiate between up to date and outdated package lists and will report ``changed`` in both situations (https://github.com/ansible-collections/community.general/pull/4318).
- pacman - binaries specified in the ``executable`` parameter must support ``--print-format`` in order to be used by this module. In particular, AUR helper ``yay`` is known not to currently support it (https://github.com/ansible-collections/community.general/pull/4312).
Deprecated Features
-------------------
community.general
~~~~~~~~~~~~~~~~~
- pacman - from community.general 5.0.0 on, the ``changed`` status of ``update_cache`` will no longer be ignored if ``name`` or ``upgrade`` is specified. To keep the old behavior, add something like ``register: result`` and ``changed_when: result.packages | length > 0`` to your task (https://github.com/ansible-collections/community.general/pull/4329).
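A minimal sketch of the suggested workaround (the task body is illustrative):

.. code-block:: yaml

   - name: Update cache and upgrade packages
     community.general.pacman:
       update_cache: true
       upgrade: true
     register: result
     changed_when: result.packages | length > 0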
Porting Guide for v5.4.0
========================
Major Changes
-------------
chocolatey.chocolatey
~~~~~~~~~~~~~~~~~~~~~
- win_chocolatey - Added choco_args option to pass additional arguments directly to Chocolatey.
vyos.vyos
~~~~~~~~~
- Add 'pool' as a value for the server key in ntp_global.
Deprecated Features
-------------------
cisco.ios
~~~~~~~~~
- `ios_acls` - Deprecated the ``fragment`` attribute and added a boolean alternative, ``enable_fragment``.
Porting Guide for v5.3.0
========================
Major Changes
-------------
f5networks.f5_modules
~~~~~~~~~~~~~~~~~~~~~
- bigip_device_info - pagination logic has also been added to help with api stability.
- bigip_device_info - the module no longer gathers information from all partitions on a device. This change will stabilize the module by gathering resources only from the given partition and prevents it from collecting excessive information that might result in crashes.
Deprecated Features
-------------------
community.general
~~~~~~~~~~~~~~~~~
- mail callback plugin - not specifying ``sender`` is deprecated and will be disallowed in community.general 6.0.0 (https://github.com/ansible-collections/community.general/pull/4140).
Porting Guide for v5.2.0
========================
Known Issues
------------
dellemc.openmanage
~~~~~~~~~~~~~~~~~~
- idrac_user - Issue(192043) The module may error out with the message ``unable to perform the import or export operation because there are pending attribute changes or a configuration job is in progress``. Wait for the job to complete and run the task again.
- ome_application_alerts_smtp - Issue(212310) - The module does not provide a proper error message if the destination_address is more than 255 characters.
- ome_application_alerts_syslog - Issue(215374) - The module does not provide a proper error message if the destination_address is more than 255 characters.
- ome_device_local_access_configuration - Issue(215035) - The module reports ``Successfully updated the local access setting`` if an unsupported value is provided for the parameter timeout_limit. However, this value is not actually applied on OpenManage Enterprise Modular.
- ome_device_local_access_configuration - Issue(217865) - The module does not display a proper error message if an unsupported value is provided for the user_defined and lcd_language parameters.
- ome_device_network_services - Issue(212681) - The module does not provide a proper error message if unsupported values are provided for the parameters- port_number, community_name, max_sessions, max_auth_retries, and idle_timeout.
- ome_device_power_settings - Issue(212679) - The module errors out with the following message if the value provided for the parameter ``power_cap`` is not within the supported range of 0 to 32767, ``Unable to complete the request because PowerCap does not exist or is not applicable for the resource URI.``
- ome_smart_fabric_uplink - Issue(186024) - The module does not allow the creation of multiple uplinks of the same name even though it is supported by OpenManage Enterprise Modular. If an uplink is created using the same name as an existing uplink, the existing uplink is modified.
purestorage.flasharray
~~~~~~~~~~~~~~~~~~~~~~
- purefa_admin - Once ``max_login`` and ``lockout`` have been set, there is currently no way to reset these to zero except through the FlashArray GUI.
Major Changes
-------------
cisco.meraki
~~~~~~~~~~~~
- meraki_mr_radio - New module
Deprecated Features
-------------------
purestorage.flasharray
~~~~~~~~~~~~~~~~~~~~~~
- purefa_sso - Deprecated in favor of M(purefa_admin). Will be removed in Collection 2.0
Porting Guide for v5.1.0
========================
Known Issues
------------
dellemc.openmanage
~~~~~~~~~~~~~~~~~~
- idrac_user - Issue(192043) The module may error out with the message ``unable to perform the import or export operation because there are pending attribute changes or a configuration job is in progress``. Wait for the job to complete and run the task again.
- ome_application_alerts_smtp - Issue(212310) - The module does not provide a proper error message if the destination_address is more than 255 characters.
- ome_application_alerts_syslog - Issue(215374) - The module does not provide a proper error message if the destination_address is more than 255 characters.
- ome_device_network_services - Issue(212681) - The module does not provide a proper error message if unsupported values are provided for the parameters- port_number, community_name, max_sessions, max_auth_retries, and idle_timeout.
- ome_device_power_settings - Issue(212679) - The module errors out with the following message if the value provided for the parameter ``power_cap`` is not within the supported range of 0 to 32767, ``Unable to complete the request because PowerCap does not exist or is not applicable for the resource URI.``
- ome_smart_fabric_uplink - Issue(186024) - The module does not allow the creation of multiple uplinks of the same name even though it is supported by OpenManage Enterprise Modular. If an uplink is created using the same name as an existing uplink, the existing uplink is modified.
Major Changes
-------------
containers.podman
~~~~~~~~~~~~~~~~~
- Add podman_tag module
- Add secrets driver and driver opts support
Removed Features
----------------
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- the "legacy" integration test setup has been removed; this does not affect end users and is only relevant to contributors (https://github.com/ansible-collections/community.hashi_vault/pull/191).
Deprecated Features
-------------------
cisco.nxos
~~~~~~~~~~
- Deprecated nxos_snmp_community module.
- Deprecated nxos_snmp_contact module.
- Deprecated nxos_snmp_host module.
- Deprecated nxos_snmp_location module.
- Deprecated nxos_snmp_traps module.
- Deprecated nxos_snmp_user module.
community.general
~~~~~~~~~~~~~~~~~
- module_helper module utils - deprecated the attribute ``ModuleHelper.VarDict`` (https://github.com/ansible-collections/community.general/pull/3801).
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- Support for Ansible 2.9 and ansible-base 2.10 is deprecated, and will be removed in the next major release (community.hashi_vault 3.0.0) next spring (https://github.com/ansible-community/community-topics/issues/50, https://github.com/ansible-collections/community.hashi_vault/issues/189).
- aws_iam_login auth method - the ``aws_iam_login`` method has been renamed to ``aws_iam``. The old name will be removed in collection version ``3.0.0``. Until then both names will work, and a warning will be displayed when using the old name (https://github.com/ansible-collections/community.hashi_vault/pull/193).
junipernetworks.junos
~~~~~~~~~~~~~~~~~~~~~
- The 'router_id' option is deprecated in the junos_ospf_interfaces, junos_ospfv2 and junos_ospfv3 resource modules.
Porting Guide for v5.0.1
========================
Major Changes
-------------
- Raised the Python requirement of the ansible package from >=2.7 to >=3.8 to match ansible-core
Porting Guide for v5.0.0
========================
Added Collections
-----------------
- cisco.ise (version 1.2.1)
- cloud.common (version 2.1.0)
- community.ciscosmb (version 1.0.4)
- community.dns (version 2.0.3)
- infoblox.nios_modules (version 1.1.2)
- netapp.storagegrid (version 21.7.0)
Known Issues
------------
Ansible-core
~~~~~~~~~~~~
- ansible-test - Tab completion anywhere other than the end of the command with the new composite options will provide incorrect results. See https://github.com/kislyuk/argcomplete/issues/351 for additional details.
dellemc.openmanage
~~~~~~~~~~~~~~~~~~
- idrac_user - Issue(192043) Module may error out with the message ``unable to perform the import or export operation because there are pending attribute changes or a configuration job is in progress``. Wait for the job to complete and run the task again.
- ome_device_power_settings - Issue(212679) The ome_device_power_settings module errors out with the following message if the value provided for the parameter ``power_cap`` is not within the supported range of 0 to 32767, ``Unable to complete the request because PowerCap does not exist or is not applicable for the resource URI.``
- ome_smart_fabric_uplink - Issue(186024) ome_smart_fabric_uplink module does not allow the creation of multiple uplinks of the same name even though it is supported by OpenManage Enterprise Modular. If an uplink is created using the same name as an existing uplink, the existing uplink is modified.
purestorage.flashblade
~~~~~~~~~~~~~~~~~~~~~~
- purefb_lag - The mac_address field in the response is not populated. This will be fixed in a future FlashBlade update.
Breaking Changes
----------------
Ansible-core
~~~~~~~~~~~~
- Action, module, and group names in module_defaults must be static values. Their values can still be templates.
- Fully qualified 'ansible.legacy' plugin names are not included implicitly in action_groups.
- Unresolvable groups, action plugins, and modules in module_defaults are an error.
- ansible-test - Automatic installation of requirements for "cloud" test plugins no longer occurs. The affected test plugins are ``aws``, ``azure``, ``cs``, ``hcloud``, ``nios``, ``opennebula``, ``openshift`` and ``vcenter``. Collections should instead use one of the supported integration test requirements files, such as the ``tests/integration/requirements.txt`` file.
- ansible-test - The HTTP Tester is no longer available with the ``ansible-test shell`` command. Only the ``integration`` and ``windows-integration`` commands provide HTTP Tester.
- ansible-test - The ``--disable-httptester`` option is no longer available. The HTTP Tester is no longer optional for tests that specify it.
- ansible-test - The ``--httptester`` option is no longer available. To override the container used for HTTP Tester tests, set the ``ANSIBLE_HTTP_TEST_CONTAINER`` environment variable instead.
- ansible-test - Unit tests for ``modules`` and ``module_utils`` are now limited to importing only ``ansible.module_utils`` from the ``ansible`` module.
- conditionals - ``when`` conditionals no longer automatically parse string booleans such as ``"true"`` and ``"false"`` into actual booleans. Any non-empty string is now considered true. The ``CONDITIONAL_BARE_VARS`` configuration variable no longer has any effect.
- hostname - Drops any remaining support for Python 2.4 by using ``with open()`` to simplify exception handling code which leaked file handles in several spots
- hostname - On FreeBSD, the string ``temporarystub`` no longer gets written to the hostname file in the get methods (and in check_mode). As a result, the default hostname will now appear as ``''`` (empty string) instead of ``temporarystub`` for consistency with other strategies. This means the ``before`` result will be different.
- hostname - On OpenRC systems and Solaris, the ``before`` value will now be ``''`` (empty string) if the permanent hostname file does not exist, for consistency with other strategies.
- intersect, difference, symmetric_difference, union filters - the default behavior is now to be case-sensitive (https://github.com/ansible/ansible/issues/74255)
- unique filter - the default behavior is now to fail if Jinja2's filter fails and explicit ``case_sensitive=False`` as the Ansible's fallback is case-sensitive (https://github.com/ansible/ansible/pull/74256)
amazon.aws
~~~~~~~~~~
- ec2_instance - the instance wait-for-state behaviour has changed. If plays require the old behavior of waiting for the instance monitoring status to become ``OK`` when launching a new instance, the action will need to specify ``state: started`` (see the sketch after this list) (https://github.com/ansible-collections/amazon.aws/pull/481).
- ec2_snapshot - support for waiting indefinitely has been dropped, new default is 10 minutes (https://github.com/ansible-collections/amazon.aws/pull/356).
- ec2_vol_info - return ``attachment_set`` is now a list of attachments with Multi-Attach support on disk. (https://github.com/ansible-collections/amazon.aws/pull/362).
- ec2_vpc_dhcp_option - The module has been refactored to use boto3. Keys and value types returned by the module are now consistent, which is a change from the previous behaviour. A ``purge_tags`` option has been added, which defaults to ``True``. (https://github.com/ansible-collections/amazon.aws/pull/252)
- ec2_vpc_dhcp_option_info - Now preserves case for tag keys in return value. (https://github.com/ansible-collections/amazon.aws/pull/252)
- module_utils.core - The boto3 switch has been removed from the region parameter (https://github.com/ansible-collections/amazon.aws/pull/287).
- module_utils/compat - vendored copy of ipaddress removed (https://github.com/ansible-collections/amazon.aws/pull/461).
- module_utils/core - updated the ``scrub_none_parameters`` function so that ``descend_into_lists`` is set to ``True`` by default (https://github.com/ansible-collections/amazon.aws/pull/297).
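A hedged sketch of requesting the old wait-for-``OK`` behavior; the name, AMI ID, and instance type are illustrative placeholders:

.. code-block:: yaml

   - amazon.aws.ec2_instance:
       name: web-server-1
       image_id: ami-0123456789abcdef0
       instance_type: t3.micro
       state: started   # wait until the monitoring status becomes OK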
arista.eos
~~~~~~~~~~
- Arista's release train 4.23.X and newer replaced and deprecated many commands. This release adds support for the syntax changes in release train 4.23 and later. Going forward, the eos modules will not support EOS software versions < 4.23.
community.aws
~~~~~~~~~~~~~
- ec2_instance - The module has been migrated to the ``amazon.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance``.
- ec2_instance_info - The module has been migrated to the ``amazon.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance_info``.
- ec2_vpc_endpoint - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint``.
- ec2_vpc_endpoint_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
- ec2_vpc_endpoint_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
- ec2_vpc_endpoint_service_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_service_info``.
- ec2_vpc_igw - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw``.
- ec2_vpc_igw_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_info``.
- ec2_vpc_igw_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_info``.
- ec2_vpc_nat_gateway - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway``.
- ec2_vpc_nat_gateway_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
- ec2_vpc_nat_gateway_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
- kms_info - key details are now returned in the ``kms_keys`` attribute rather than the ``keys`` attribute (https://github.com/ansible-collections/community.aws/pull/648).
community.crypto
~~~~~~~~~~~~~~~~
- Adjust ``dirName`` text parsing and to text converting code to conform to `Sections 2 and 3 of RFC 4514 <https://datatracker.ietf.org/doc/html/rfc4514.html>`_. This is similar to how `cryptography handles this <https://cryptography.io/en/latest/x509/reference/#cryptography.x509.Name.rfc4514_string>`_ (https://github.com/ansible-collections/community.crypto/pull/274).
- acme module utils - removing compatibility code (https://github.com/ansible-collections/community.crypto/pull/290).
- acme_* modules - removed vendored copy of the Python library ``ipaddress``. If you are using Python 2.x, please make sure to install the library (https://github.com/ansible-collections/community.crypto/pull/287).
- compatibility module_utils - removed vendored copy of the Python library ``ipaddress`` (https://github.com/ansible-collections/community.crypto/pull/287).
- crypto module utils - removing compatibility code (https://github.com/ansible-collections/community.crypto/pull/290).
- get_certificate, openssl_csr_info, x509_certificate_info - depending on the ``cryptography`` version used, the modules might not return the ASN.1 value for an extension as contained in the certificate respectively CSR, but a re-encoded version of it. This should usually be identical to the value contained in the source file, unless the value was malformed. For extensions not handled by C(cryptography) the value contained in the source file is always returned unaltered (https://github.com/ansible-collections/community.crypto/pull/318).
- module_utils - removed various PyOpenSSL support functions and default backend values that are not needed for the openssl_pkcs12 module (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_csr, openssl_csr_pipe, x509_crl - the ``subject`` respectively ``issuer`` fields no longer ignore empty values, but instead fail when encountering them (https://github.com/ansible-collections/community.crypto/pull/316).
- openssl_privatekey_info - by default consistency checks are not run; they need to be explicitly requested by passing ``check_consistency=true`` (https://github.com/ansible-collections/community.crypto/pull/309).
- x509_crl - for idempotency checks, the ``issuer`` order is ignored. If order is important, use the new ``issuer_ordered`` option (https://github.com/ansible-collections/community.crypto/pull/316).
community.dns
~~~~~~~~~~~~~
- All Hetzner modules and plugins which handle DNS records now work with unquoted TXT values by default. The old behavior can be obtained by setting ``txt_transformation=api`` (https://github.com/ansible-collections/community.dns/issues/48, https://github.com/ansible-collections/community.dns/pull/57, https://github.com/ansible-collections/community.dns/pull/60).
- Hosttech API creation - now requires a ``ModuleOptionProvider`` object instead of an ``AnsibleModule`` object. Alternatively an Ansible plugin instance can be passed (https://github.com/ansible-collections/community.dns/pull/37).
- The hetzner_dns_record_info and hosttech_dns_record_info modules have been renamed to hetzner_dns_record_set_info and hosttech_dns_record_set_info, respectively (https://github.com/ansible-collections/community.dns/pull/54).
- The hosttech_dns_record module has been renamed to hosttech_dns_record_set (https://github.com/ansible-collections/community.dns/pull/31).
- The internal bulk record updating helper (``bulk_apply_changes``) now also returns the records that were deleted, created or updated (https://github.com/ansible-collections/community.dns/pull/63).
- The internal record API no longer allows to manage comments explicitly (https://github.com/ansible-collections/community.dns/pull/63).
- When using the internal modules API, now a zone ID type and a provider information object must be passed (https://github.com/ansible-collections/community.dns/pull/27).
- hetzner_dns_record* modules - implement correct handling of default TTL. The value ``none`` is now accepted and returned in this case (https://github.com/ansible-collections/community.dns/pull/52, https://github.com/ansible-collections/community.dns/issues/50).
- hetzner_dns_record, hetzner_dns_record_set, hetzner_dns_record_sets - the default TTL is now 300 and no longer 3600, which equals the default in the web console (https://github.com/ansible-collections/community.dns/pull/43).
- hosttech_* module_utils - completely rewrite and refactor to support new JSON API and allow to re-use provider-independent module logic (https://github.com/ansible-collections/community.dns/pull/4).
- hosttech_dns_record_set - the option ``overwrite`` was replaced by a new option ``on_existing``. Specifying ``overwrite=true`` is equivalent to ``on_existing=replace`` (the new default). Specifying ``overwrite=false`` with ``state=present`` is equivalent to ``on_existing=keep_and_fail``, and specifying ``overwrite=false`` with ``state=absent`` is equivalent to ``on_existing=keep`` (https://github.com/ansible-collections/community.dns/pull/31).
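A hedged sketch of the new option in use; the zone, record, and value are illustrative placeholders:

.. code-block:: yaml

   - community.dns.hosttech_dns_record_set:
       zone_name: example.com
       record: www.example.com
       type: A
       value:
         - 192.0.2.1
       on_existing: keep_and_fail   # was overwrite=false with state=present
       state: present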
community.docker
~~~~~~~~~~~~~~~~
- docker_compose - fixed ``timeout`` defaulting behavior so that ``stop_grace_period``, if defined in the compose file, will be used if ``timeout`` is not specified (https://github.com/ansible-collections/community.docker/pull/163).
community.general
~~~~~~~~~~~~~~~~~
- archive - adding idempotency checks for changes to file names and content within the ``destination`` file (https://github.com/ansible-collections/community.general/pull/3075).
- lxd inventory plugin - when used with Python 2, the plugin now needs ``ipaddress`` installed `from pypi <https://pypi.org/project/ipaddress/>`_ (https://github.com/ansible-collections/community.general/pull/2441).
- scaleway_security_group_rule - when used with Python 2, the module now needs ``ipaddress`` installed `from pypi <https://pypi.org/project/ipaddress/>`_ (https://github.com/ansible-collections/community.general/pull/2441).
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- connection options - there is no longer a default value for the ``url`` option (the Vault address), so a value must be supplied (https://github.com/ansible-collections/community.hashi_vault/issues/83).
community.okd
~~~~~~~~~~~~~
- drop python 2 support (https://github.com/openshift/community.okd/pull/93).
community.routeros
~~~~~~~~~~~~~~~~~~
- api - due to a programming error, the module never failed on errors. This has now been fixed. If you are relying on the module not failing in case of idempotent commands (resulting in errors like ``failure: already have such address``), you need to adjust your roles/playbooks. We suggest to use ``failed_when`` to accept failure in specific circumstances, for example ``failed_when: "'failure: already have ' in result.msg[0]"`` (https://github.com/ansible-collections/community.routeros/pull/39).
- api - splitting commands no longer uses a naive split by whitespace, but a more RouterOS CLI compatible splitting algorithm (https://github.com/ansible-collections/community.routeros/pull/45).
- command - the module now always indicates that a change happens. If this is not correct, please use ``changed_when`` to determine the correct changed status for a task (https://github.com/ansible-collections/community.routeros/pull/50).
community.zabbix
~~~~~~~~~~~~~~~~
- all roles now reference other roles and modules via their fully qualified collection names, which makes Ansible 2.10 the minimum supported version for roles (See https://github.com/ansible-collections/community.zabbix/pull/477).
kubernetes.core
~~~~~~~~~~~~~~~
- Drop python 2 support (https://github.com/ansible-collections/kubernetes.core/pull/86).
- helm_plugin - remove unused ``release_namespace`` parameter (https://github.com/ansible-collections/kubernetes.core/pull/85).
- helm_plugin_info - remove unused ``release_namespace`` parameter (https://github.com/ansible-collections/kubernetes.core/pull/85).
- k8s_cluster_info - returned apis as list to avoid being overwritten in case of multiple version (https://github.com/ansible-collections/kubernetes.core/pull/41).
- k8s_facts - remove the deprecated alias from k8s_facts to k8s_info (https://github.com/ansible-collections/kubernetes.core/pull/125).
netapp.storagegrid
~~~~~~~~~~~~~~~~~~
- This version introduces a breaking change.
All modules have been renamed from ``nac_sg_*`` to ``na_sg_*``.
Playbooks and Roles must be updated to match.
Major Changes
-------------
Ansible-core
~~~~~~~~~~~~
- Python Controller Requirement - Python 3.8 or newer is required for the control node (the machine that runs Ansible) (https://github.com/ansible/ansible/pull/74013)
- ansible-test - All "cloud" plugins which use containers can now be used with all POSIX and Windows hosts. Previously the plugins did not work with Windows at all, and support for hosts created with the ``--remote`` option was inconsistent.
- ansible-test - Collections can now specify controller and target specific integration test requirements and constraints. If provided, they take precedence over the previously available requirements and constraints files.
- ansible-test - Integration tests run with the ``integration`` command can now be executed on two separate hosts instead of always running on the controller. The target host can be one provided by ``ansible-test`` or by the user, as long as it is accessible using SSH.
- ansible-test - Most container features are now supported under Podman. Previously a symbolic link for ``docker`` pointing to ``podman`` was required.
- ansible-test - New ``--controller`` and ``--target`` / ``--target-python`` options have been added to allow more control over test environments.
- ansible-test - Python 3.8 - 3.10 are now required to run ``ansible-test``, thus matching the Ansible controller Python requirements. Older Python versions (2.6 - 2.7 and 3.5 - 3.10) can still be the target for relevant tests.
- ansible-test - SSH port forwarding and redirection is now used exclusively to make container ports available on non-container hosts. When testing on POSIX systems this requires SSH login as root. Previously SSH port forwarding was combined with firewall rules or other port redirection methods, with some platforms being unsupported.
- ansible-test - Sanity tests always run in isolated Python virtual environments specific to the requirements of each test. The environments are cached.
- ansible-test - Sanity tests are now separated into two categories, controller and target. All tests except ``import`` and ``compile`` are controller tests. The controller tests always run using the same Python version used to run ``ansible-test``. The target tests use the Python version(s) specified by the user, or all available Python versions.
- ansible-test - Sanity tests now use fully pinned requirements that are independent of each other and other test types.
- ansible-test - Tests run with the ``centos6`` and ``default`` test containers now use a PyPI proxy container to access PyPI when Python 2.6 is used. This allows tests running under Python 2.6 to continue functioning even though PyPI is discontinuing support for non-SNI capable clients.
- ansible-test - The ``future-import-boilerplate`` and ``metaclass-boilerplate`` sanity tests are limited to remote-only code. Additionally, they are skipped for collections which declare no support for Python 2.x.
- ansible-test - The ``import`` and ``compile`` sanity tests limit remote-only Python version checks to remote-only code.
- ansible-test - Unit tests for controller-only code now require Python 3.8 or later.
- ansible-test - Version neutral sanity tests now require Python 3.8 or later.
- junit callback - The ``junit_xml`` and ``ordereddict`` Python modules are no longer required to use the ``junit`` callback plugin.
amazon.aws
~~~~~~~~~~
- amazon.aws collection - Due to the AWS SDKs announcing the end of support for Python less than 3.6 (https://boto3.amazonaws.com/v1/documentation/api/1.17.64/guide/migrationpy3.html) this collection now requires Python 3.6+ (https://github.com/ansible-collections/amazon.aws/pull/298).
- amazon.aws collection - The amazon.aws collection has dropped support for ``botocore<1.18.0`` and ``boto3<1.15.0``. Most modules will continue to work with older versions of the AWS SDK, however compatibility with older versions of the SDK is not guaranteed and will not be tested. When using older versions of the SDK a warning will be emitted by Ansible (https://github.com/ansible-collections/amazon.aws/pull/502).
- ec2_instance - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance``.
- ec2_instance_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_instance_info``.
- ec2_vpc_endpoint - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint``.
- ec2_vpc_endpoint_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
- ec2_vpc_endpoint_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_info``.
- ec2_vpc_endpoint_service_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_endpoint_service_info``.
- ec2_vpc_igw - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw``.
- ec2_vpc_igw_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_facts``.
- ec2_vpc_igw_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_igw_info``.
- ec2_vpc_nat_gateway - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway``.
- ec2_vpc_nat_gateway_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
- ec2_vpc_nat_gateway_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_nat_gateway_info``.
- ec2_vpc_route_table - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table``.
- ec2_vpc_route_table_facts - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table_facts``.
- ec2_vpc_route_table_info - The module has been migrated from the ``community.aws`` collection. Playbooks using the Fully Qualified Collection Name for this module should be updated to use ``amazon.aws.ec2_vpc_route_table_info``.
cisco.ise
~~~~~~~~~
- Adds ``ise_uses_api_gateway`` to module options.
- Adds an 'aws_deployment' role that allows the deployment of an arbitrarily large ISE cluster to AWS.
- Adds ise_responses to return values of info modules.
- Adds ise_update_response to return values of non-info modules.
- Fixes the inner logic of modules that have no get-by-name operation and whose filter was not working.
- Renamed module device_administration_authorization_exception_rules to device_administration_local_exception_rules.
- Renamed module device_administration_authorization_global_exception_rules to device_administration_global_exception_rules.
- Renamed module network_access_authorization_exception_rules to network_access_local_exception_rules.
- Renamed module network_access_authorization_global_exception_rules to network_access_global_exception_rules.
- Updates options required for modules.
- Updates sdk parameters for previous modules
- device_administration_authorization_exception_rules - removed module.
- device_administration_authorization_exception_rules_info - removed module.
- device_administration_authorization_global_exception_rules - removed module.
- device_administration_authorization_global_exception_rules_info - removed module.
- guest_user_reinstante - removed module.
- import_trust_cert - removed module.
- network_access_authorization_exception_rules - removed module.
- network_access_authorization_exception_rules_info - removed module.
- network_access_authorization_global_exception_rules - removed module.
- network_access_authorization_global_exception_rules_info - removed module.
- personas_check_standalone - Adds module for the deployment of personas to existing nodes in an ISE cluster.
- personas_export_certs - Adds module for the deployment of personas to existing nodes in an ISE cluster.
- personas_promote_primary - Adds module for the deployment of personas to existing nodes in an ISE cluster.
- personas_update_roles - Adds module for the deployment of personas to existing nodes in an ISE cluster.
- service_info - removed module.
- system_certificate_export - removed module.
- telemetry_info_info - removed module.
cloud.common
~~~~~~~~~~~~
- turbo - enable turbo mode for lookup plugins
cloudscale_ch.cloud
~~~~~~~~~~~~~~~~~~~
- Add custom_image module
community.aws
~~~~~~~~~~~~~
- community.aws collection - The community.aws collection has dropped support for ``botocore<1.18.0`` and ``boto3<1.15.0`` (https://github.com/ansible-collections/community.aws/pull/711). Most modules will continue to work with older versions of the AWS SDK, however compatibility with older versions of the SDK is not guaranteed and will not be tested. When using older versions of the SDK a warning will be emitted by Ansible (https://github.com/ansible-collections/amazon.aws/pull/442).
community.ciscosmb
~~~~~~~~~~~~~~~~~~
- Python 2.6, 2.7, 3.5 is required
- add CBS350 support
- add antsibull-changelog support
- add ciscosmb_command
- added facts subset "interfaces"
- ciscosmb_facts with default subset and unit tests
- interface name canonicalization
- transform collection qaxi.ciscosmb to community.ciscosmb
- transform community.ciscosmb.ciscosmb_command to community.ciscosmb.command
- transform community.ciscosmb.ciscosmb_facts to community.ciscosmb.facts
- unit tests for CBS350
community.dns
~~~~~~~~~~~~~
- hosttech_* modules - support the new JSON API at https://api.ns1.hosttech.eu/api/documentation/ (https://github.com/ansible-collections/community.dns/pull/4).
community.general
~~~~~~~~~~~~~~~~~
- bitbucket_* modules - ``client_id`` is no longer marked as ``no_log=true``. If you relied on its value not showing up in logs and output, please mark the whole tasks with ``no_log: true`` (https://github.com/ansible-collections/community.general/pull/2045).
community.kubernetes
~~~~~~~~~~~~~~~~~~~~
- redirect everything from ``community.kubernetes`` to ``kubernetes.core`` (https://github.com/ansible-collections/community.kubernetes/pull/425).
community.okd
~~~~~~~~~~~~~
- update to use kubernetes.core 2.0 (https://github.com/openshift/community.okd/pull/93).
community.postgresql
~~~~~~~~~~~~~~~~~~~~
- postgresql_query - the default value of the ``as_single_query`` option will be changed to ``yes`` in community.postgresql 2.0.0 (https://github.com/ansible-collections/community.postgresql/issues/85).
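A minimal sketch of setting the option explicitly so the upcoming default change has no effect; the database and query are illustrative:

.. code-block:: yaml

   - community.postgresql.postgresql_query:
       db: mydb
       query: SELECT version();
       as_single_query: true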
community.vmware
~~~~~~~~~~~~~~~~
- vmware_object_custom_attributes_info - added a new module to gather custom attributes of an object (https://github.com/ansible-collections/community.vmware/pull/851).
containers.podman
~~~~~~~~~~~~~~~~~
- Add systemd generation for pods
- Generate systemd service files for containers
dellemc.openmanage
~~~~~~~~~~~~~~~~~~
- idrac_server_config_profile - Added support for exporting and importing Server Configuration Profile through HTTP/HTTPS share.
- ome_device_group - Added support for adding devices to a group using the IP addresses of the devices and group ID.
- ome_firmware - Added option to stage the firmware update and support for selecting components and devices for baseline-based firmware update.
- ome_firmware_baseline - Module supports check mode, and allows the modification and deletion of firmware baselines.
- ome_firmware_catalog - Module supports check mode, and allows the modification and deletion of firmware catalogs.
fortinet.fortios
~~~~~~~~~~~~~~~~
- Add real-world use cases in the example section for some configuration modules.
- Collect the current configurations of the modules and convert them into playbooks.
- Improve ``fortios_configuration_fact`` to use multiple selectors concurrently.
- New module fortios_monitor_fact.
- Support FortiOS 7.0.1.
- Support Fortios 7.0.
- Support Log APIs.
- Support ``check_mode`` in all configuration API-based modules.
- Support filtering for fact gathering modules ``fortios_configuration_fact`` and ``fortios_monitor_fact``.
- Support member operation (delete/add extra members) on an object that has a list of members in it.
- Support moving policy in ``firewall_central_snat_map``.
- Support selectors feature in ``fortios_monitor_fact`` and ``fortios_log_fact``.
- Unify schemas for monitor API.
gluster.gluster
~~~~~~~~~~~~~~~
- enable client.ssl,server.ssl before starting the gluster volume (https://github.com/gluster/gluster-ansible-collection/pull/19)
hetzner.hcloud
~~~~~~~~~~~~~~
- Introduction of placement groups
kubernetes.core
~~~~~~~~~~~~~~~
- k8s - deprecate merge_type=json. The JSON patch functionality has never worked (https://github.com/ansible-collections/kubernetes.core/pull/99).
- k8s_json_patch - split JSON patch functionality out into a separate module (https://github.com/ansible-collections/kubernetes.core/pull/99).
- replaces the openshift client with the official kubernetes client (https://github.com/ansible-collections/kubernetes.core/issues/34).
netapp.cloudmanager
~~~~~~~~~~~~~~~~~~~
- Adding stage environment to all modules in cloudmanager
netbox.netbox
~~~~~~~~~~~~~
- packages is now a required Python package and gets installed via Ansible 2.10+.
openvswitch.openvswitch
~~~~~~~~~~~~~~~~~~~~~~~
- The repository was mistakenly tagged 2.0.0; as this was not intended and cannot be reverted, we are releasing 2.0.1 to make the community aware of the major version update.
ovirt.ovirt
~~~~~~~~~~~
- remove_stale_lun - Add role for removing stale LUN (https://bugzilla.redhat.com/1966873).
Removed Features
----------------
Ansible-core
~~~~~~~~~~~~
- The built-in module_util ``ansible.module_utils.common.removed`` was previously deprecated and has been removed.
- connections, removed password check stubs that had been moved to become plugins.
- task, inline parameters being auto coerced into variables has been removed.
ansible.windows
~~~~~~~~~~~~~~~
- win_reboot - Removed ``shutdown_timeout`` and ``shutdown_timeout_sec`` which has not done anything since Ansible 2.5.
community.crypto
~~~~~~~~~~~~~~~~
- acme_* modules - the ``acme_directory`` option is now required (https://github.com/ansible-collections/community.crypto/pull/290).
- acme_* modules - the ``acme_version`` option is now required (https://github.com/ansible-collections/community.crypto/pull/290).
- acme_account_facts - the deprecated redirect has been removed. Use community.crypto.acme_account_info instead (https://github.com/ansible-collections/community.crypto/pull/290).
- acme_account_info - ``retrieve_orders=url_list`` no longer returns the return value ``orders``. Use the ``order_uris`` return value instead (https://github.com/ansible-collections/community.crypto/pull/290).
- crypto.info module utils - the deprecated redirect has been removed. Use ``crypto.pem`` instead (https://github.com/ansible-collections/community.crypto/pull/290).
- get_certificate - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_certificate - the deprecated redirect has been removed. Use community.crypto.x509_certificate instead (https://github.com/ansible-collections/community.crypto/pull/290).
- openssl_certificate_info - the deprecated redirect has been removed. Use community.crypto.x509_certificate_info instead (https://github.com/ansible-collections/community.crypto/pull/290).
- openssl_csr - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_csr and openssl_csr_pipe - ``version`` now only accepts the (default) value 1 (https://github.com/ansible-collections/community.crypto/pull/290).
- openssl_csr_info - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_csr_pipe - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_privatekey - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_privatekey_info - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_privatekey_pipe - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_publickey - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_publickey_info - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_signature - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- openssl_signature_info - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- x509_certificate - remove ``assertonly`` provider (https://github.com/ansible-collections/community.crypto/pull/289).
- x509_certificate - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- x509_certificate_info - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
- x509_certificate_pipe - removed the ``pyopenssl`` backend (https://github.com/ansible-collections/community.crypto/pull/273).
community.docker
~~~~~~~~~~~~~~~~
- docker_container - the default value of ``container_default_behavior`` changed to ``no_defaults`` (https://github.com/ansible-collections/community.docker/pull/210).
- docker_container - the default value of ``network_mode`` is now the name of the first network specified in ``networks`` if such are specified and ``networks_cli_compatible=true`` (https://github.com/ansible-collections/community.docker/pull/210).
- docker_container - the special value ``all`` can no longer be used in ``published_ports`` next to other values. Please use ``publish_all_ports=true`` instead (https://github.com/ansible-collections/community.docker/pull/210).
- docker_login - removed the ``email`` option (https://github.com/ansible-collections/community.docker/pull/210).
community.general
~~~~~~~~~~~~~~~~~
- All inventory and vault scripts contained in community.general were moved to the `contrib-scripts GitHub repository <https://github.com/ansible-community/contrib-scripts>`_ (https://github.com/ansible-collections/community.general/pull/2696).
- ModuleHelper module utils - remove fallback when value could not be determined for a parameter (https://github.com/ansible-collections/community.general/pull/3461).
- Removed deprecated netapp module utils and doc fragments (https://github.com/ansible-collections/community.general/pull/3197).
- The nios, nios_next_ip, nios_next_network lookup plugins, the nios documentation fragment, and the nios_host_record, nios_ptr_record, nios_mx_record, nios_fixed_address, nios_zone, nios_member, nios_a_record, nios_aaaa_record, nios_network, nios_dns_view, nios_txt_record, nios_naptr_record, nios_srv_record, nios_cname_record, nios_nsgroup, and nios_network_view module have been removed from community.general 4.0.0 and were replaced by redirects to the `infoblox.nios_modules <https://galaxy.ansible.com/infoblox/nios_modules>`_ collection. Please install the ``infoblox.nios_modules`` collection to continue using these plugins and modules, and update your FQCNs (https://github.com/ansible-collections/community.general/pull/3592).
- The vendored copy of ``ipaddress`` has been removed. Please use ``ipaddress`` from the Python 3 standard library, or `from pypi <https://pypi.org/project/ipaddress/>`_. (https://github.com/ansible-collections/community.general/pull/2441).
- cpanm - removed the deprecated ``system_lib`` option. Use Ansible's privilege escalation mechanism instead; the option basically used ``sudo`` (https://github.com/ansible-collections/community.general/pull/3461).
- grove - removed the deprecated alias ``message`` of the ``message_content`` option (https://github.com/ansible-collections/community.general/pull/3461).
- proxmox - default value of ``proxmox_default_behavior`` changed to ``no_defaults`` (https://github.com/ansible-collections/community.general/pull/3461).
- proxmox_kvm - default value of ``proxmox_default_behavior`` changed to ``no_defaults`` (https://github.com/ansible-collections/community.general/pull/3461).
- runit - removed the deprecated ``dist`` option which was not used by the module (https://github.com/ansible-collections/community.general/pull/3461).
- telegram - removed the deprecated ``msg``, ``msg_format`` and ``chat_id`` options (https://github.com/ansible-collections/community.general/pull/3461).
- xfconf - the default value of ``disable_facts`` changed to ``true``, and the value ``false`` is no longer allowed. Register the module results instead (https://github.com/ansible-collections/community.general/pull/3461).
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- drop support for Python 2 and Python 3.5 (https://github.com/ansible-collections/community.hashi_vault/issues/81).
- support for the following deprecated environment variables has been removed: ``VAULT_AUTH_METHOD``, ``VAULT_TOKEN_PATH``, ``VAULT_TOKEN_FILE``, ``VAULT_ROLE_ID``, ``VAULT_SECRET_ID`` (https://github.com/ansible-collections/community.hashi_vault/pull/173).
Deprecated Features
-------------------
Ansible-core
~~~~~~~~~~~~
- ansible-test - The ``--docker-no-pull`` option is deprecated and has no effect.
- ansible-test - The ``--no-pip-check`` option is deprecated and has no effect.
- include action is deprecated in favor of include_tasks, import_tasks and import_playbook.
- module_utils' FileLock is scheduled to be removed, it is not used due to its unreliable nature.
amazon.aws
~~~~~~~~~~
- ec2 - the boto based ``ec2`` module has been deprecated in favour of the boto3 based ``ec2_instance`` module. The ``ec2`` module will be removed in release 4.0.0 (https://github.com/ansible-collections/amazon.aws/pull/424).
- ec2_classic_lb - setting of the ``ec2_elb`` fact has been deprecated and will be removed in release 4.0.0 of the collection. The module now returns ``elb`` which can be accessed using the register keyword (https://github.com/ansible-collections/amazon.aws/pull/552).
- ec2_vpc_dhcp_option - The ``new_config`` return key has been deprecated and will be removed in a future release. It will be replaced by ``dhcp_config``. Both values are returned in the interim. (https://github.com/ansible-collections/amazon.aws/pull/252)
ansible.netcommon
~~~~~~~~~~~~~~~~~
- network_cli - The paramiko_ssh setting ``look_for_keys`` was set automatically based on the values of the ``password`` and ``private_key_file`` options passed to network_cli. This option can now be set explicitly, and the automatic setting of ``look_for_keys`` will be removed after 2024-01-01 (https://github.com/ansible-collections/ansible.netcommon/pull/271).
ansible.windows
~~~~~~~~~~~~~~~
- win_reboot - Unreachable hosts can be ignored with ``ignore_errors: True``, this ability will be removed in a future version. Use ``ignore_unreachable: True`` to ignore unreachable hosts instead. - https://github.com/ansible-collections/ansible.windows/issues/62
- win_updates - Deprecated the ``filtered_reason`` return value for each filtered update in favour of ``filtered_reasons``. This has been done to show all the reasons why an update was filtered and not just the first reason.
- win_updates - Deprecated the ``use_scheduled_task`` option as it is no longer used.
- win_updates - Deprecated the ``whitelist`` and ``blacklist`` options in favour of ``accept_list`` and ``reject_list`` respectively to conform to the new standards used in Ansible for these types of options.
arista.eos
~~~~~~~~~~
- Remove testing with provider for ansible-test integration jobs. This helps prepare us to move to network-ee integration tests.
cisco.ios
~~~~~~~~~
- Deprecated ios_bgp in favor of ios_bgp_global and ios_bgp_address_family.
- Deprecated ios_ntp modules.
- Remove testing with provider for ansible-test integration jobs. This helps prepare us to move to network-ee integration tests.
cisco.iosxr
~~~~~~~~~~~
- The iosxr_logging module has been deprecated in favor of the new iosxr_logging_global resource module and will be removed in a release after '2023-08-01'.
cisco.nxos
~~~~~~~~~~
- Deprecated `nxos_ntp`, `nxos_ntp_options`, `nxos_ntp_auth` modules.
- The nxos_logging module has been deprecated in favor of the new nxos_logging_global resource module and will be removed in a release after '2023-08-01'.
community.aws
~~~~~~~~~~~~~
- dynamodb_table - DynamoDB does not support specifying non-key-attributes when creating an ``ALL`` index. Passing ``includes`` for such indexes is currently ignored but will result in failures after version 3.0.0 (https://github.com/ansible-collections/community.aws/pull/726).
- dynamodb_table - DynamoDB does not support updating the primary indexes on a table. Attempts to make such changes are currently ignored but will result in failures after version 3.0.0 (https://github.com/ansible-collections/community.aws/pull/726).
- ec2_elb - the ``ec2_elb`` module has been removed and redirected to the ``elb_instance`` module which functions identically. The original ``ec2_elb`` name is now deprecated and will be removed in release 3.0.0 (https://github.com/ansible-collections/community.aws/pull/586).
- ec2_elb_info - the boto based ``ec2_elb_info`` module has been deprecated in favour of the boto3 based ``elb_classic_lb_info`` module. The ``ec2_elb_info`` module will be removed in release 3.0.0 (https://github.com/ansible-collections/community.aws/pull/586).
- elb_classic_lb - the ``elb_classic_lb`` module has been removed and redirected to the ``amazon.aws.ec2_elb_lb`` module which functions identically.
- elb_instance - setting of the ``ec2_elb`` fact has been deprecated and will be removed in release 4.0.0 of the collection. See the module documentation for an alternative example using the register keyword (https://github.com/ansible-collections/community.aws/pull/773).
- iam - the boto based ``iam`` module has been deprecated in favour of the boto3 based ``iam_user``, ``iam_group`` and ``iam_role`` modules. The ``iam`` module will be removed in release 3.0.0 (https://github.com/ansible-collections/community.aws/pull/664).
- iam_cert - the iam_cert module has been renamed to iam_server_certificate for consistency with the companion iam_server_certificate_info module. The usage of the module has not changed. The iam_cert alias will be removed in version 4.0.0 (https://github.com/ansible-collections/community.aws/pull/728).
- iam_server_certificate - Passing file names to the ``cert``, ``chain_cert`` and ``key`` parameters has been deprecated. We recommend using a lookup plugin to read the files instead, see the documentation for an example (https://github.com/ansible-collections/community.aws/pull/735).
- iam_server_certificate - the default value for the ``dup_ok`` parameter is currently ``false``, in version 4.0.0 this will be updated to ``true``. To preserve the current behaviour explicitly set the ``dup_ok`` parameter to ``false`` (https://github.com/ansible-collections/community.aws/pull/737).
- rds - the boto based ``rds`` module has been deprecated in favour of the boto3 based ``rds_instance`` module. The ``rds`` module will be removed in release 3.0.0 (https://github.com/ansible-collections/community.aws/pull/663).
- rds_snapshot - the rds_snapshot module has been renamed to rds_instance_snapshot. The usage of the module has not changed. The rds_snapshot alias will be removed in version 4.0.0 (https://github.com/ansible-collections/community.aws/pull/783).
- script_inventory_ec2 - The ec2.py inventory script is being moved to a new repository. The script can now be downloaded from https://github.com/ansible-community/contrib-scripts/blob/main/inventory/ec2.py and will be removed from this collection in the 3.0 release. We recommend migrating from the script to the `amazon.aws.ec2` inventory plugin.
community.azure
~~~~~~~~~~~~~~~
- All community.azure.azure_rm_<resource>_facts modules are deprecated. Use azure.azcollection.azure_rm_<resource>_info modules instead (https://github.com/ansible-collections/community.azure/pull/24).
- All community.azure.azure_rm_<resource>_info modules are deprecated. Use azure.azcollection.azure_rm_<resource>_info modules instead (https://github.com/ansible-collections/community.azure/pull/24).
- community.azure.azure_rm_managed_disk and community.azure.azure_rm_manageddisk are deprecated. Use azure.azcollection.azure_rm_manageddisk instead (https://github.com/ansible-collections/community.azure/pull/24).
- community.azure.azure_rm_virtualmachine_extension and community.azure.azure_rm_virtualmachineextension are deprecated. Use azure.azcollection.azure_rm_virtualmachineextension instead (https://github.com/ansible-collections/community.azure/pull/24).
- community.azure.azure_rm_virtualmachine_scaleset and community.azure.azure_rm_virtualmachinescaleset are deprecated. Use azure.azcollection.azure_rm_virtualmachinescaleset instead (https://github.com/ansible-collections/community.azure/pull/24).
community.crypto
~~~~~~~~~~~~~~~~
- acme_* modules - ACME version 1 is now deprecated and support for it will be removed in community.crypto 2.0.0 (https://github.com/ansible-collections/community.crypto/pull/288).
community.dns
~~~~~~~~~~~~~
- The hosttech_dns_records module has been renamed to hosttech_dns_record_sets. The old name will stop working in community.dns 3.0.0 (https://github.com/ansible-collections/community.dns/pull/31).
community.docker
~~~~~~~~~~~~~~~~
- docker_* modules and plugins, except ``docker_swarm`` connection plugin and ``docker_compose`` and ``docker_stack*`` modules - the current default ``localhost`` for ``tls_hostname`` is deprecated. In community.docker 2.0.0 it will be computed from ``docker_host`` instead (https://github.com/ansible-collections/community.docker/pull/134).
- docker_container - the new ``command_handling``'s default value, ``compatibility``, is deprecated and will change to ``correct`` in community.docker 3.0.0. A deprecation warning is emitted by the module in cases where the behavior will change. Please note that ansible-core will output a deprecation warning only once, so if it is shown for an earlier task, there could be more tasks with this warning where it is not shown (https://github.com/ansible-collections/community.docker/pull/186).
- docker_container - using the special value ``all`` in ``published_ports`` has been deprecated. Use ``publish_all_ports=true`` instead (https://github.com/ansible-collections/community.docker/pull/210).
community.general
~~~~~~~~~~~~~~~~~
- Support for Ansible 2.9 and ansible-base 2.10 is deprecated, and will be removed in the next major release (community.general 5.0.0) next spring. While most content will probably still work with ansible-base 2.10, we will remove symbolic links for modules and action plugins, which will make it impossible to use them with Ansible 2.9 anymore. Please use community.general 4.x.y with Ansible 2.9 and ansible-base 2.10, as these releases will continue to support Ansible 2.9 and ansible-base 2.10 even after they are End of Life (https://github.com/ansible-community/community-topics/issues/50, https://github.com/ansible-collections/community.general/pull/3723).
- ali_instance_info - marked removal version of deprecated parameters ``availability_zone`` and ``instance_names`` (https://github.com/ansible-collections/community.general/issues/2429).
- bitbucket_* modules - ``username`` options have been deprecated in favor of ``workspace`` and will be removed in community.general 6.0.0 (https://github.com/ansible-collections/community.general/pull/2045).
- dnsimple - python-dnsimple < 2.0.0 is deprecated and support for it will be removed in community.general 5.0.0 (https://github.com/ansible-collections/community.general/pull/2946#discussion_r667624693).
- gitlab_group_members - setting ``gitlab_group`` to ``name`` or ``path`` is deprecated. Use ``full_path`` instead (https://github.com/ansible-collections/community.general/pull/3451).
- keycloak_authentication - the return value ``flow`` is now deprecated and will be removed in community.general 6.0.0; use ``end_state`` instead (https://github.com/ansible-collections/community.general/pull/3280).
- keycloak_group - the return value ``group`` is now deprecated and will be removed in community.general 6.0.0; use ``end_state`` instead (https://github.com/ansible-collections/community.general/pull/3280).
- linode - parameter ``backupsenabled`` is deprecated and will be removed in community.general 5.0.0 (https://github.com/ansible-collections/community.general/pull/2410).
- lxd_container - the current default value ``true`` of ``ignore_volatile_options`` is deprecated and will change to ``false`` in community.general 6.0.0 (https://github.com/ansible-collections/community.general/pull/3429).
- serverless - deprecating parameter ``functions`` because it was not used in the code (https://github.com/ansible-collections/community.general/pull/2845).
- xfconf - deprecate the ``get`` state. The new module ``xfconf_info`` should be used instead (https://github.com/ansible-collections/community.general/pull/3049).
community.grafana
~~~~~~~~~~~~~~~~~
- grafana_dashboard lookup - Providing a mangled version of the API key is no longer preferred.
community.hashi_vault
~~~~~~~~~~~~~~~~~~~~~
- hashi_vault collection - support for Python 2 will be dropped in version ``2.0.0`` of ``community.hashi_vault`` (https://github.com/ansible-collections/community.hashi_vault/issues/81).
- hashi_vault collection - support for Python 3.5 will be dropped in version ``2.0.0`` of ``community.hashi_vault`` (https://github.com/ansible-collections/community.hashi_vault/issues/81).
- lookup hashi_vault - the ``[lookup_hashi_vault]`` section in the ``ansible.cfg`` file is deprecated and will be removed in collection version ``3.0.0``. Instead, the section ``[hashi_vault_collection]`` can be used, which will apply to all plugins in the collection going forward (https://github.com/ansible-collections/community.hashi_vault/pull/144).
community.kubernetes
~~~~~~~~~~~~~~~~~~~~
- The ``community.kubernetes`` collection is being renamed to ``kubernetes.core``. All content in the collection has been replaced by deprecated redirects to ``kubernetes.core``. If you are using FQCNs starting with ``community.kubernetes``, please update them to ``kubernetes.core`` (https://github.com/ansible-collections/community.kubernetes/pull/439).
community.vmware
~~~~~~~~~~~~~~~~
- vmware_guest_vnc - vSphere 7.0 removed the built-in VNC server (https://docs.vmware.com/en/VMware-vSphere/7.0/rn/vsphere-esxi-vcenter-server-70-release-notes.html#productsupport).
inspur.sm
~~~~~~~~~
- add_ad_group - This feature will be removed in inspur.sm.add_ad_group 3.0.0; replaced with inspur.sm.ad_group.
- add_ldap_group - This feature will be removed in inspur.sm.add_ldap_group 3.0.0; replaced with inspur.sm.ldap_group.
- add_user - This feature will be removed in inspur.sm.add_user 3.0.0; replaced with inspur.sm.user.
- add_user_group - This feature will be removed in inspur.sm.add_user_group 3.0.0; replaced with inspur.sm.user_group.
- del_ad_group - This feature will be removed in inspur.sm.del_ad_group 3.0.0; replaced with inspur.sm.ad_group.
- del_ldap_group - This feature will be removed in inspur.sm.del_ldap_group 3.0.0; replaced with inspur.sm.ldap_group.
- del_user - This feature will be removed in inspur.sm.del_user 3.0.0; replaced with inspur.sm.user.
- del_user_group - This feature will be removed in inspur.sm.del_user_group 3.0.0; replaced with inspur.sm.user_group.
- edit_ad_group - This feature will be removed in inspur.sm.edit_ad_group 3.0.0; replaced with inspur.sm.ad_group.
- edit_ldap_group - This feature will be removed in inspur.sm.edit_ldap_group 3.0.0; replaced with inspur.sm.ldap_group.
- edit_user - This feature will be removed in inspur.sm.edit_user 3.0.0; replaced with inspur.sm.user.
- edit_user_group - This feature will be removed in inspur.sm.edit_user_group 3.0.0; replaced with inspur.sm.user_group.
junipernetworks.junos
~~~~~~~~~~~~~~~~~~~~~
- Deprecated router_id from ospfv2 resource module.
- Deprecated router_id from ospfv3 resource module.
- The junos_logging module has been deprecated in favor of the new junos_logging_global resource module and will be removed in a release after '2023-08-01'.
vyos.vyos
~~~~~~~~~
- The vyos_logging module has been deprecated in favor of the new vyos_logging_global resource module and will be removed in a release after "2023-08-01".
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,922 |
Docs: scenario guides: Replace yes/no booleans with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/scenario_guides/` files.
Changes are: change `yes` to `true` and `no` to `false` (must be lowercase). Please open one PR to handle these changes. It should impact 16 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/scenario_guides/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78922
|
https://github.com/ansible/ansible/pull/78934
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
|
5b333c9665182e20c2dfbed64006ced12e897ccb
| 2022-09-29T14:10:23Z |
python
| 2022-10-03T20:40:12Z |
docs/docsite/rst/scenario_guides/guide_aci.rst
|
.. _aci_guide:
Cisco ACI Guide
===============
.. _aci_guide_intro:
What is Cisco ACI ?
-------------------
Application Centric Infrastructure (ACI)
........................................
The Cisco Application Centric Infrastructure (ACI) allows application requirements to define the network. This architecture simplifies, optimizes, and accelerates the entire application deployment life cycle.
Application Policy Infrastructure Controller (APIC)
...................................................
The APIC manages the scalable ACI multi-tenant fabric. The APIC provides a unified point of automation and management, policy programming, application deployment, and health monitoring for the fabric. The APIC, which is implemented as a replicated synchronized clustered controller, optimizes performance, supports any application anywhere, and provides unified operation of the physical and virtual infrastructure.
The APIC enables network administrators to easily define the optimal network for applications. Data center operators can clearly see how applications consume network resources, easily isolate and troubleshoot application and infrastructure problems, and monitor and profile resource usage patterns.
The Cisco Application Policy Infrastructure Controller (APIC) API enables applications to directly connect with a secure, shared, high-performance resource pool that includes network, compute, and storage capabilities.
ACI Fabric
..........
The Cisco Application Centric Infrastructure (ACI) Fabric includes Cisco Nexus 9000 Series switches with the APIC to run in the leaf/spine ACI fabric mode. These switches form a "fat-tree" network by connecting each leaf node to each spine node; all other devices connect to the leaf nodes. The APIC manages the ACI fabric.
The ACI fabric provides consistent low-latency forwarding across high-bandwidth links (40 Gbps, with a 100-Gbps future capability). Traffic with the source and destination on the same leaf switch is handled locally, and all other traffic travels from the ingress leaf to the egress leaf through a spine switch. Although this architecture appears as two hops from a physical perspective, it is actually a single Layer 3 hop because the fabric operates as a single Layer 3 switch.
The ACI fabric object-oriented operating system (OS) runs on each Cisco Nexus 9000 Series node. It enables programming of objects for each configurable element of the system. The ACI fabric OS renders policies from the APIC into a concrete model that runs in the physical infrastructure. The concrete model is analogous to compiled software; it is the form of the model that the switch operating system can execute.
All the switch nodes contain a complete copy of the concrete model. When an administrator creates a policy in the APIC that represents a configuration, the APIC updates the logical model. The APIC then performs the intermediate step of creating a fully elaborated policy that it pushes into all the switch nodes where the concrete model is updated.
The APIC is responsible for fabric activation, switch firmware management, network policy configuration, and instantiation. While the APIC acts as the centralized policy and network management engine for the fabric, it is completely removed from the data path, including the forwarding topology. Therefore, the fabric can still forward traffic even when communication with the APIC is lost.
More information
................
Various resources exist to start learning ACI. Here is a list of interesting articles from the community.
- `Adam Raffe: Learning ACI <https://adamraffe.com/learning-aci/>`_
- `Luca Relandini: ACI for dummies <https://lucarelandini.blogspot.be/2015/03/aci-for-dummies.html>`_
- `Cisco DevNet Learning Labs about ACI <https://learninglabs.cisco.com/labs/tags/ACI>`_
.. _aci_guide_modules:
Using the ACI modules
---------------------
The Ansible ACI modules provide a user-friendly interface to managing your ACI environment using Ansible playbooks.
For instance, ensuring that a specific tenant exists is done with the following Ansible task, which uses the aci_tenant module:
.. code-block:: yaml
- name: Ensure tenant customer-xyz exists
aci_tenant:
host: my-apic-1
username: admin
password: my-password
tenant: customer-xyz
description: Customer XYZ
state: present
A complete list of existing ACI modules is available on the content tab of the `ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_.
If you want to learn how to write your own ACI modules to contribute, look at the :ref:`Developing Cisco ACI modules <aci_dev_guide>` section.
Querying ACI configuration
..........................
A module can also be used to query a specific object.
.. code-block:: yaml
- name: Query tenant customer-xyz
aci_tenant:
host: my-apic-1
username: admin
password: my-password
tenant: customer-xyz
state: query
register: my_tenant
Or query all objects.
.. code-block:: yaml
- name: Query all tenants
aci_tenant:
host: my-apic-1
username: admin
password: my-password
state: query
register: all_tenants
After registering the return values of the aci_tenant task as shown above, you can access all tenant information from variable ``all_tenants``.
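As a quick illustration of working with the registered data, the sketch below prints each tenant's name. It assumes the query results come back in the ``current`` return value using the usual APIC object layout (a list of ``fvTenant`` objects with an ``attributes`` dictionary), as described under Return values further below.

.. code-block:: yaml

    # A minimal sketch, assuming the usual APIC object layout for query results.
    - name: Show the name of every tenant
      debug:
        msg: "{{ item.fvTenant.attributes.name }}"
      loop: "{{ all_tenants.current }}"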
Running on the controller locally
.................................
As originally designed, Ansible modules are shipped to and run on the remote target(s). However, the ACI modules (like most network-related modules) do not run on the network devices or controller (in this case the APIC); they talk directly to the APIC's REST interface.
For this reason, the modules need to run on the local Ansible controller (or be delegated to another system that *can* connect to the APIC).
Gathering facts
```````````````
Because we run the modules on the Ansible controller, gathering facts will not work. That is why, when using these ACI modules, it is mandatory to disable facts gathering. You can do this globally in your ``ansible.cfg`` or by adding ``gather_facts: false`` to every play.
.. code-block:: yaml
:emphasize-lines: 3
- name: Another play in my playbook
hosts: my-apic-1
gather_facts: false
tasks:
- name: Create a tenant
aci_tenant:
...
Delegating to localhost
```````````````````````
So let us assume we have our target configured in the inventory using the FQDN name as the ``ansible_host`` value, as shown below.
.. code-block:: yaml
:emphasize-lines: 3
apics:
my-apic-1:
ansible_host: apic01.fqdn.intra
ansible_user: admin
ansible_password: my-password
One way to set this up is to add to every task the directive: ``delegate_to: localhost``.
.. code-block:: yaml
:emphasize-lines: 8
- name: Query all tenants
aci_tenant:
host: '{{ ansible_host }}'
username: '{{ ansible_user }}'
password: '{{ ansible_password }}'
state: query
delegate_to: localhost
register: all_tenants
If you forget to add this directive, Ansible will attempt to connect to the APIC using SSH, copy the module over, and run it remotely. This fails with a clear error, yet may be confusing to some.
Using the local connection method
`````````````````````````````````
Another frequently used option is to tie the ``local`` connection method to this target so that every subsequent task for this target uses the local connection method (hence runs locally, rather than over SSH).
In this case the inventory may look like this:
.. code-block:: yaml
:emphasize-lines: 6
apics:
my-apic-1:
ansible_host: apic01.fqdn.intra
ansible_user: admin
ansible_password: my-password
ansible_connection: local
The tasks themselves then do not need anything special added.
.. code-block:: yaml
- name: Query all tenants
aci_tenant:
host: '{{ ansible_host }}'
username: '{{ ansible_user }}'
password: '{{ ansible_password }}'
state: query
register: all_tenants
.. hint:: For clarity we have added ``delegate_to: localhost`` to all the examples in the module documentation. This helps to ensure first-time users can easily copy and paste parts and make them work with a minimum of effort.
Common parameters
.................
Every Ansible ACI module accepts the following parameters that influence the module's communication with the APIC REST API:
host
Hostname or IP address of the APIC.
port
Port to use for communication. (Defaults to ``443`` for HTTPS, and ``80`` for HTTP)
username
User name used to log on to the APIC. (Defaults to ``admin``)
password
Password for ``username`` to log on to the APIC, using password-based authentication.
private_key
Private key for ``username`` to log on to APIC, using signature-based authentication.
This could either be the raw private key content (include header/footer) or a file that stores the key content.
*New in version 2.5*
certificate_name
Name of the certificate in the ACI Web GUI.
This defaults to either the ``username`` value or the ``private_key`` file base name.
*New in version 2.5*
timeout
Timeout value for socket-level communication.
use_proxy
Use system proxy settings. (Defaults to ``true``)
use_ssl
Use HTTPS or HTTP for APIC REST communication. (Defaults to ``true``)
validate_certs
Validate certificate when using HTTPS communication. (Defaults to ``true``)
output_level
Influence the level of detail ACI modules return to the user. (One of ``normal``, ``info`` or ``debug``) *New in version 2.5*
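As an illustration, the hypothetical task below combines several of these parameters; the host name and values are placeholders, not recommendations.

.. code-block:: yaml

    # A sketch combining several common parameters (placeholder values).
    - name: Ensure tenant exists, with explicit connection settings
      aci_tenant:
        host: my-apic-1
        port: 443
        username: admin
        password: my-password
        use_ssl: true
        validate_certs: false    # for example, in a lab with a self-signed certificate
        timeout: 30
        output_level: info
        tenant: customer-xyz
        state: present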
Proxy support
.............
By default, if an environment variable ``<protocol>_proxy`` is set on the target host, requests will be sent through that proxy. This behaviour can be overridden by setting a variable for this task (see :ref:`playbooks_environment`), or by using the ``use_proxy`` module parameter.
HTTP redirects can redirect from HTTP to HTTPS so ensure that the proxy environment for both protocols is correctly configured.
If proxy support is not needed, but the system may have it configured nevertheless, use the parameter ``use_proxy: false`` to avoid accidental system proxy usage.
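For example, the hypothetical tasks below show both approaches; the proxy URL is a placeholder.

.. code-block:: yaml

    # Sketch: bypass any system proxy for this task.
    - name: Query all tenants without using a proxy
      aci_tenant:
        host: my-apic-1
        username: admin
        password: my-password
        use_proxy: false
        state: query

    # Sketch: override the proxy environment for a single task (placeholder URL).
    - name: Query all tenants through an explicit proxy
      aci_tenant:
        host: my-apic-1
        username: admin
        password: my-password
        state: query
      environment:
        https_proxy: http://proxy.example.com:8080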
.. hint:: Selective proxy support using the ``no_proxy`` environment variable is also supported.
Return values
.............
.. versionadded:: 2.5
The following values are always returned:
current
The resulting state of the managed object, or results of your query.
The following values are returned when ``output_level: info``:
previous
The original state of the managed object (before any change was made).
proposed
The proposed config payload, based on user-supplied values.
sent
The sent config payload, based on user-supplied values and the existing configuration.
The following values are returned when ``output_level: debug`` or ``ANSIBLE_DEBUG=1``:
filter_string
The filter used for specific APIC queries.
method
The HTTP method used for the sent payload. (Either ``GET`` for queries, ``DELETE`` or ``POST`` for changes)
response
The HTTP response from the APIC.
status
The HTTP status code for the request.
url
The url used for the request.
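To see these values in practice, you could register a result and print the extra keys, a minimal sketch assuming ``output_level: info``:

.. code-block:: yaml

    # Sketch: inspect the additional return values exposed by output_level: info.
    - name: Update the tenant description and capture the result
      aci_tenant:
        host: my-apic-1
        username: admin
        password: my-password
        tenant: customer-xyz
        description: Customer XYZ (updated)
        output_level: info
        state: present
      register: result

    - name: Show what existed before and what was sent
      debug:
        msg:
          previous: "{{ result.previous }}"
          sent: "{{ result.sent }}"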
.. note:: The module return values are documented in detail as part of each module's documentation.
More information
................
Various resources exist to learn more about ACI programmability; we recommend the following links:
- :ref:`Developing Cisco ACI modules <aci_dev_guide>`
- `Jacob McGill: Automating Cisco ACI with Ansible <https://blogs.cisco.com/developer/automating-cisco-aci-with-ansible-eliminates-repetitive-day-to-day-tasks>`_
- `Cisco DevNet Learning Labs about ACI and Ansible <https://learninglabs.cisco.com/labs/tags/ACI,Ansible>`_
.. _aci_guide_auth:
ACI authentication
------------------
Password-based authentication
.............................
If you want to log on using a username and password, you can use the following parameters with your ACI modules:
.. code-block:: yaml
username: admin
password: my-password
Password-based authentication is very simple to work with, but it is not the most efficient form of authentication from ACI's point-of-view as it requires a separate login-request and an open session to work. To avoid having your session time-out and requiring another login, you can use the more efficient Signature-based authentication.
.. note:: Password-based authentication also may trigger anti-DoS measures in ACI v3.1+ that causes session throttling and results in HTTP 503 errors and login failures.
.. warning:: Never store passwords in plain text.
The "Vault" feature of Ansible allows you to keep sensitive data such as passwords or keys in encrypted files, rather than as plain text in your playbooks or roles. These vault files can then be distributed or placed in source control. See :ref:`playbooks_vault` for more information.
Signature-based authentication using certificates
.................................................
.. versionadded:: 2.5
Using signature-based authentication is more efficient and more reliable than password-based authentication.
Generate certificate and private key
````````````````````````````````````
Signature-based authentication requires a (self-signed) X.509 certificate with private key, and a configuration step for your AAA user in ACI. To generate a working X.509 certificate and private key, use the following procedure:
.. code-block:: bash
$ openssl req -new -newkey rsa:1024 -days 36500 -nodes -x509 -keyout admin.key -out admin.crt -subj '/CN=Admin/O=Your Company/C=US'
Configure your local user
`````````````````````````
Perform the following steps:
- Add the X.509 certificate to your ACI AAA local user at :guilabel:`ADMIN` » :guilabel:`AAA`
- Click :guilabel:`AAA Authentication`
- Check that in the :guilabel:`Authentication` field the :guilabel:`Realm` field displays :guilabel:`Local`
- Expand :guilabel:`Security Management` » :guilabel:`Local Users`
- Click the name of the user you want to add a certificate to, in the :guilabel:`User Certificates` area
- Click the :guilabel:`+` sign and in the :guilabel:`Create X509 Certificate` enter a certificate name in the :guilabel:`Name` field
* If you use the basename of your private key here, you don't need to enter ``certificate_name`` in Ansible
- Copy and paste your X.509 certificate in the :guilabel:`Data` field.
You can automate this by using the following Ansible task:
.. code-block:: yaml
- name: Ensure we have a certificate installed
aci_aaa_user_certificate:
host: my-apic-1
username: admin
password: my-password
aaa_user: admin
certificate_name: admin
certificate: "{{ lookup('file', 'pki/admin.crt') }}" # This will read the certificate data from a local file
.. note:: Signature-based authentication only works with local users.
Use signature-based authentication with Ansible
```````````````````````````````````````````````
You need the following parameters with your ACI module(s) for it to work:
.. code-block:: yaml
:emphasize-lines: 2,3
username: admin
private_key: pki/admin.key
certificate_name: admin # This could be left out !
or you can use the private key content:
.. code-block:: yaml
:emphasize-lines: 2,3
username: admin
private_key: |
-----BEGIN PRIVATE KEY-----
<<your private key content>>
-----END PRIVATE KEY-----
certificate_name: admin # This could be left out !
.. hint:: If you use a certificate name in ACI that matches the private key's basename, you can leave out the ``certificate_name`` parameter like the example above.
Using Ansible Vault to encrypt the private key
``````````````````````````````````````````````
.. versionadded:: 2.8
To start, encrypt the private key and give it a strong password.
.. code-block:: bash
ansible-vault encrypt admin.key
Use a text editor to open the private key. You should see an encrypted key now.
.. code-block:: bash
$ANSIBLE_VAULT;1.1;AES256
56484318584354658465121889743213151843149454864654151618131547984132165489484654
45641818198456456489479874513215489484843614848456466655432455488484654848489498
....
Copy and paste the new encrypted key into your playbook as a new variable.
.. code-block:: yaml
private_key: !vault |
$ANSIBLE_VAULT;1.1;AES256
56484318584354658465121889743213151843149454864654151618131547984132165489484654
45641818198456456489479874513215489484843614848456466655432455488484654848489498
....
Use the new variable for the private_key:
.. code-block:: yaml
username: admin
private_key: "{{ private_key }}"
certificate_name: admin # This could be left out !
When running the playbook, use "--ask-vault-pass" to decrypt the private key.
.. code-block:: bash
ansible-playbook site.yaml --ask-vault-pass
More information
````````````````
- Detailed information about Signature-based Authentication is available from `Cisco APIC Signature-Based Transactions <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/kb/b_KB_Signature_Based_Transactions.html>`_.
- More information on Ansible Vault can be found on the :ref:`Ansible Vault <vault>` page.
.. _aci_guide_rest:
Using ACI REST with Ansible
---------------------------
While a lot of ACI modules already exist in the Ansible distribution, and the most common actions can be performed with these existing modules, there is always something that may not be possible with off-the-shelf modules.
The aci_rest module provides you with direct access to the APIC REST API and enables you to perform any task not already covered by the existing modules. This may seem like a complex undertaking, but you can generate the needed REST payload for any action performed in the ACI web interface effortlessly.
Built-in idempotency
....................
Because the APIC REST API is intrinsically idempotent and can report whether a change was made, the aci_rest module automatically inherits both capabilities and is a first-class solution for automating your ACI infrastructure. As a result, users that require more powerful low-level access to their ACI infrastructure don't have to give up on idempotency and don't have to guess whether a change was performed when using the aci_rest module.
Using the aci_rest module
.........................
The aci_rest module accepts the native XML and JSON payloads, but additionally accepts inline YAML payload (structured like JSON). The XML payload requires you to use a path ending with ``.xml`` whereas JSON or YAML require the path to end with ``.json``.
When you're making modifications, you can use the POST or DELETE methods, whereas doing just queries requires the GET method.
For instance, if you would like to ensure a specific tenant exists on ACI, the four examples below are functionally identical:
**XML** (Native ACI REST)
.. code-block:: yaml
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.xml
content: |
<fvTenant name="customer-xyz" descr="Customer XYZ"/>
**JSON** (Native ACI REST)
.. code-block:: yaml
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.json
content:
{
"fvTenant": {
"attributes": {
"name": "customer-xyz",
"descr": "Customer XYZ"
}
}
}
**YAML** (Ansible-style REST)
.. code-block:: yaml
- aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: post
path: /api/mo/uni.json
content:
fvTenant:
attributes:
name: customer-xyz
descr: Customer XYZ
**Ansible task** (Dedicated module)
.. code-block:: yaml
- aci_tenant:
host: my-apic-1
private_key: pki/admin.key
tenant: customer-xyz
description: Customer XYZ
state: present
.. hint:: The XML format is more practical when there is a need to template the REST payload (inline), but the YAML format is more convenient for maintaining your infrastructure-as-code and feels more naturally integrated with Ansible playbooks. The dedicated modules offer a simpler, more abstracted, but also more limited experience. Use what feels best for your use case.
More information
................
Plenty of resources exist to learn about ACI's APIC REST interface; we recommend the links below:
- `The ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
- `APIC REST API Configuration Guide <https://www.cisco.com/c/en/us/td/docs/switches/datacenter/aci/apic/sw/2-x/rest_cfg/2_1_x/b_Cisco_APIC_REST_API_Configuration_Guide.html>`_ -- Detailed guide on how the APIC REST API is designed and used, incl. many examples
- `APIC Management Information Model reference <https://developer.cisco.com/docs/apic-mim-ref/>`_ -- Complete reference of the APIC object model
- `Cisco DevNet Learning Labs about ACI and REST <https://learninglabs.cisco.com/labs/tags/ACI,REST>`_
.. _aci_guide_ops:
Operational examples
--------------------
Here is a small overview of useful operational tasks to reuse in your playbooks.
Feel free to contribute more useful snippets.
Waiting for all controllers to be ready
.......................................
You can use the task below after you have started to build your APICs and configured the cluster, to wait until all the APICs have come online. It will wait until the number of controllers equals the number listed in the ``apic`` inventory group.
.. code-block:: yaml
- name: Waiting for all controllers to be ready
aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: get
path: /api/node/class/topSystem.json?query-target-filter=eq(topSystem.role,"controller")
register: topsystem
until: topsystem is success and topsystem.totalCount|int >= groups['apic']|count >= 3
retries: 20
delay: 30
Waiting for cluster to be fully-fit
...................................
The below example waits until the cluster is fully-fit. In this example you know the number of APICs in the cluster and you verify each APIC reports a 'fully-fit' status.
.. code-block:: yaml
- name: Waiting for cluster to be fully-fit
aci_rest:
host: my-apic-1
private_key: pki/admin.key
method: get
path: /api/node/class/infraWiNode.json?query-target-filter=wcard(infraWiNode.dn,"topology/pod-1/node-1/av")
register: infrawinode
until: >
infrawinode is success and
infrawinode.totalCount|int >= groups['apic']|count >= 3 and
infrawinode.imdata[0].infraWiNode.attributes.health == 'fully-fit' and
infrawinode.imdata[1].infraWiNode.attributes.health == 'fully-fit' and
infrawinode.imdata[2].infraWiNode.attributes.health == 'fully-fit'
retries: 30
delay: 30
.. _aci_guide_errors:
APIC error messages
-------------------
The following error messages may occur, and this section can help you understand exactly what is going on and how to fix or avoid them.
APIC Error 122: unknown managed object class 'polUni'
In case you receive this error while you are certain your aci_rest payload and object classes are correct, the issue might be that your payload is not in fact correct JSON (for example, the sent payload is using single quotes, rather than double quotes), and as a result the APIC is not correctly parsing your object classes from the payload. One way to avoid this is by using a YAML or an XML formatted payload, which are easier to construct correctly and modify later.
APIC Error 400: invalid data at line '1'. Attributes are missing, tag 'attributes' must be specified first, before any other tag
Although the JSON specification allows unordered elements, the APIC REST API requires that the JSON ``attributes`` element precede the ``children`` array or other elements. So you need to ensure that your payload conforms to this requirement. Sorting your dictionary keys will do the trick just fine. If you don't have any attributes, it may be necessary to add: ``attributes: {}`` as the APIC does expect the entry to precede any ``children``.
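As a sketch of an inline YAML payload that satisfies this ordering requirement, note how ``attributes`` appears (present even if empty) before ``children``; the class and names below are illustrative only.

.. code-block:: yaml

    # Sketch: 'attributes' comes first (even if empty), then 'children'.
    content:
      fvTenant:
        attributes:
          name: customer-xyz
        children:
          - fvAp:
              attributes:
                name: my-app    # illustrative application profile name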
APIC Error 801: property descr of uni/tn-TENANT/ap-AP failed validation for value 'A "legacy" network'
Some values in the APIC have strict format-rules to comply to, and the internal APIC validation check for the provided value failed. In the above case, the ``description`` parameter (internally known as ``descr``) only accepts values conforming to Regex: ``[a-zA-Z0-9\\!#$%()*,-./:;@ _{|}~?&+]+``, in general it must not include quotes or square brackets.
.. _aci_guide_known_issues:
Known issues
------------
The aci_rest module is a wrapper around the APIC REST API. As a result any issues related to the APIC will be reflected in the use of this module.
All the issues below have been reported to the vendor, and most can simply be avoided.
Too many consecutive API calls may result in connection throttling
Starting with ACI v3.1 the APIC will actively throttle password-based authenticated connection rates over a specific threshold. This is part of an anti-DDoS measure, but it can interfere with Ansible playbooks that use password-based authentication against ACI. Currently, one solution is to increase this threshold within the nginx configuration, but using signature-based authentication is recommended.
**NOTE:** It is advisable to use signature-based authentication with ACI as it not only prevents connection-throttling, but also improves general performance when using the ACI modules.
Specific requests may not reflect changes correctly (`#35401 <https://github.com/ansible/ansible/issues/35041>`_)
There is a known issue where specific requests to the APIC do not properly reflect changes in the resulting output, even when we request those changes explicitly from the APIC. In one instance, using the path ``api/node/mo/uni/infra.xml`` fails, whereas ``api/node/mo/uni/infra/.xml`` does work correctly.
**NOTE:** A workaround is to register the task return values (for example, ``register: this``) and influence when the task should report a change by adding: ``changed_when: this.imdata != []``.
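Put together, the workaround looks like the hypothetical task below; the path reuses the working ``/infra/.xml`` variant mentioned above, and the payload is illustrative.

.. code-block:: yaml

    # Sketch: only report a change when the APIC actually returned changed data.
    - name: Configure infra settings with corrected change reporting
      aci_rest:
        host: my-apic-1
        private_key: pki/admin.key
        method: post
        path: /api/node/mo/uni/infra/.xml
        content: |
          <infraInfra/>
      register: this
      changed_when: this.imdata != []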
Specific requests are known to not be idempotent (`#35050 <https://github.com/ansible/ansible/issues/35050>`_)
The behaviour of the APIC is inconsistent in its use of ``status="created"`` and ``status="deleted"``. The result is that when you use ``status="created"`` in your payload, the resulting tasks are not idempotent and creation will fail when the object was already created. However, this is not the case with ``status="deleted"``, where such a call to a non-existing object does not cause any failure whatsoever.
**NOTE:** A workaround is to avoid using ``status="created"`` and instead use ``status="modified"`` when idempotency is essential to your workflow.
Setting user password is not idempotent (`#35544 <https://github.com/ansible/ansible/issues/35544>`_)
Due to an inconsistency in the APIC REST API, a task that sets the password of a locally-authenticated user is not idempotent. The APIC will complain with message ``Password history check: user dag should not use previous 5 passwords``.
**NOTE:** There is no workaround for this issue.
.. _aci_guide_community:
ACI Ansible community
---------------------
If you have specific issues with the ACI modules, or a feature request, or you would like to contribute to the ACI project by proposing changes or documentation updates, look at the Ansible Community wiki ACI page at: https://github.com/ansible/community/wiki/Network:-ACI
You will find our roadmap, an overview of open ACI issues and pull-requests, and more information about who we are. If you have an interest in using ACI with Ansible, feel free to join! We occasionally meet online (on the #ansible-network chat channel, using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) to track progress and prepare for new Ansible releases.
.. seealso::
`ACI collection on Ansible Galaxy <https://galaxy.ansible.com/cisco/aci>`_
View the content tab for a complete list of supported ACI modules.
:ref:`Developing Cisco ACI modules <aci_dev_guide>`
A walkthrough on how to develop new Cisco ACI modules to contribute back.
`ACI community <https://github.com/ansible/community/wiki/Network:-ACI>`_
The Ansible ACI community wiki page, includes roadmap, ideas and development documentation.
:ref:`network_guide`
A detailed guide on how to use Ansible for automating network infrastructure.
`Network Working Group <https://github.com/ansible/community/tree/main/group-network>`_
The Ansible Network community page, includes contact information and meeting information.
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,922 |
Docs: scenario guides: Replace yes/no booleans with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/scenario_guides/` files.
Changes are: change `yes` to `true` and `no` to `false` (must be lowercase). Please open one PR to handle these changes. It should impact 16 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/scenario_guides/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78922
|
https://github.com/ansible/ansible/pull/78934
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
|
5b333c9665182e20c2dfbed64006ced12e897ccb
| 2022-09-29T14:10:23Z |
python
| 2022-10-03T20:40:12Z |
docs/docsite/rst/scenario_guides/guide_azure.rst
|
Microsoft Azure Guide
=====================
.. important::
Red Hat Ansible Automation Platform will soon be available on Microsoft Azure. `Sign up to preview the experience <https://www.redhat.com/en/engage/ansible-microsoft-azure-e-202110220735>`_.
Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create
and orchestrate infrastructure on the Microsoft Azure Cloud.
Requirements
------------
Using the Azure Resource Manager modules requires having specific Azure SDK modules
installed on the host running Ansible.
.. code-block:: bash
$ pip install 'ansible[azure]'
If you are running Ansible from source, you can install the dependencies from the
root directory of the Ansible repo.
.. code-block:: bash
$ pip install .[azure]
You can also directly run Ansible in `Azure Cloud Shell <https://shell.azure.com>`_, where Ansible is pre-installed.
Authenticating with Azure
-------------------------
Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
* Active Directory Username/Password
* Service Principal Credentials
Follow the directions for the strategy you wish to use, then proceed to `Providing Credentials to Azure Modules`_ for
instructions on how to actually use the modules and authenticate with the Azure API.
Using Service Principal
.......................
There is now a detailed official tutorial describing `how to create a service principal <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal>`_.
After stepping through the tutorial you will have:
* Your Client ID, which is found in the "client id" box in the "Configure" page of your application in the Azure portal
* Your Secret key, generated when you created the application. You cannot view the key after creation.
If you lost the key, you must create a new one in the "Configure" page of your application.
* And finally, a tenant ID. It's a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your
application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
Using Active Directory Username/Password
........................................
To create an Active Directory username/password:
* Connect to the Azure Classic Portal with your admin account
* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
* Go to Settings - Administrators
* Click on Add and enter the email of the new user.
* Check the checkbox of the subscription you want to test with this user.
* Log in to the Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the
temporary password for OAuth login.
Providing Credentials to Azure Modules
......................................
The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible AWX or Jenkins, you will
most likely want to use environment variables. For local development you may wish to store your credentials in a file
within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The
order of precedence is parameters, then environment variables, and finally a file found in your home directory.
Using Environment Variables
```````````````````````````
To pass service principal credentials via the environment, define the following variables:
* AZURE_CLIENT_ID
* AZURE_SECRET
* AZURE_SUBSCRIPTION_ID
* AZURE_TENANT
To pass Active Directory username/password via the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_SUBSCRIPTION_ID
To pass Active Directory username/password in ADFS via the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_CLIENT_ID
* AZURE_TENANT
* AZURE_ADFS_AUTHORITY_URL
"AZURE_ADFS_AUTHORITY_URL" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Storing in a File
`````````````````
When working in a development environment, it may be desirable to store credentials in a file. The modules will look
for credentials in ``$HOME/.azure/credentials``. This file is an ini style file. It will look as follows:
.. code-block:: ini
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
.. note:: If your secret values contain non-ASCII characters, you must `URL Encode <https://www.w3schools.com/tags/ref_urlencode.asp>`_ them to avoid login errors.
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each
section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the
environment or pass a profile parameter to specify a specific profile.
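For example, a credentials file with an additional, hypothetical ``testing`` profile might look like this; you would then select it by exporting ``AZURE_PROFILE=testing``:

.. code-block:: ini

    [default]
    subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    secret=xxxxxxxxxxxxxxxxx
    tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

    [testing]
    subscription_id=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
    client_id=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy
    secret=yyyyyyyyyyyyyyyyy
    tenant=yyyyyyy-yyyy-yyyy-yyyy-yyyyyyyyyyyy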
Passing as Parameters
`````````````````````
If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
* client_id
* secret
* subscription_id
* tenant
Or, pass the following parameters for Active Directory username/password:
* ad_user
* password
* subscription_id
Or, pass the following parameters for ADFS username/password:
* ad_user
* password
* client_id
* tenant
* adfs_authority_url
"adfs_authority_url" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Other Cloud Environments
------------------------
To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud, Azure US Government Cloud, Azure Stack),
pass the "cloud_environment" argument to modules, configure it in a credential profile, or set the "AZURE_CLOUD_ENVIRONMENT"
environment variable. The value is either a cloud name as defined by the Azure Python SDK (for example, "AzureChinaCloud",
"AzureUSGovernment"; defaults to "AzureCloud") or an Azure metadata discovery URL (for Azure Stack).
Creating Virtual Machines
-------------------------
There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine module. We can either create
a storage account, network interface, security group and public IP address and pass the names of these objects to the
module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
Creating Individual Components
..............................
An Azure module is available to help you create a storage account, virtual network, subnet, network interface,
security group and public IP. Here is a full example of creating each of these and passing the names to the
``azure.azcollection.azure_rm_virtualmachine`` module at the end:
.. code-block:: yaml
- name: Create storage account
azure.azcollection.azure_rm_storageaccount:
resource_group: Testing
name: testaccount001
account_type: Standard_LRS
- name: Create virtual network
azure.azcollection.azure_rm_virtualnetwork:
resource_group: Testing
name: testvn001
address_prefixes: "10.10.0.0/16"
- name: Add subnet
azure.azcollection.azure_rm_subnet:
resource_group: Testing
name: subnet001
address_prefix: "10.10.0.0/24"
virtual_network: testvn001
- name: Create public ip
azure.azcollection.azure_rm_publicipaddress:
resource_group: Testing
allocation_method: Static
name: publicip001
- name: Create security group that allows SSH
azure.azcollection.azure_rm_securitygroup:
resource_group: Testing
name: secgroup001
rules:
- name: SSH
protocol: Tcp
destination_port_range: 22
access: Allow
priority: 101
direction: Inbound
- name: Create NIC
azure.azcollection.azure_rm_networkinterface:
resource_group: Testing
name: testnic001
virtual_network: testvn001
subnet: subnet001
public_ip_name: publicip001
security_group: secgroup001
- name: Create virtual machine
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm001
vm_size: Standard_D1
storage_account: testaccount001
storage_container: testvm001
storage_blob: testvm001.vhd
admin_username: admin
admin_password: Password!
network_interfaces: testnic001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example.
See each individual module for further details and examples.
Creating a Virtual Machine with Default Options
...............................................
If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only
caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a
virtual network already with an existing subnet, you can run the following to create a VM:
.. code-block:: yaml
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
vm_size: Standard_D1
admin_username: chouseknecht
ssh_password_enabled: false
ssh_public_keys: "{{ ssh_keys }}"
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Creating a Virtual Machine in Availability Zones
..................................................
If you want to create a VM in an availability zone,
consider the following:
* Both OS disk and data disk must be a 'managed disk', not an 'unmanaged disk'.
* When creating a VM with the ``azure.azcollection.azure_rm_virtualmachine`` module,
you need to explicitly set the ``managed_disk_type`` parameter
to change the OS disk to a managed disk.
Otherwise, the OS disk becomes an unmanaged disk.
* When you create a data disk with the ``azure.azcollection.azure_rm_manageddisk`` module,
you need to explicitly specify the ``storage_account_type`` parameter
to make it a managed disk.
Otherwise, the data disk will be an unmanaged disk.
* A managed disk does not require a storage account or a storage container,
unlike an unmanaged disk.
In particular, note that once a VM is created on an unmanaged disk,
an unnecessary storage container named "vhds" is automatically created.
* When you create an IP address with the ``azure.azcollection.azure_rm_publicipaddress`` module,
you must set the ``sku`` parameter to ``standard``.
Otherwise, the IP address cannot be used in an availability zone.
Dynamic Inventory Script
------------------------
If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command line parameters. To see available command
line options execute the following:
.. code-block:: bash
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
$ ./azure_rm.py --help
As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
or passed directly to ansible-playbook using the -i option. No matter how it is executed, the script produces JSON representing
all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of
Azure resource groups, or even down to a specific host.
For a given host, the inventory script provides the following host variables:
.. code-block:: JSON
{
"ansible_host": "XXX.XXX.XXX.XXX",
"computer_name": "computer_name2",
"fqdn": null,
"id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
"image": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "latest"
},
"location": "westus",
"mac_address": "00-00-5E-00-53-FE",
"name": "object-name",
"network_interface": "interface-name",
"network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
"network_security_group": null,
"network_security_group_id": null,
"os_disk": {
"name": "object-name",
"operating_system_type": "Linux"
},
"plan": null,
"powerstate": "running",
"private_ip": "172.26.3.6",
"private_ip_alloc_method": "Static",
"provisioning_state": "Succeeded",
"public_ip": "XXX.XXX.XXX.XXX",
"public_ip_alloc_method": "Static",
"public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
"public_ip_name": "object-name",
"resource_group": "galaxy-production",
"security_group": "object-name",
"security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
"tags": {
"db": "mysql"
},
"type": "Microsoft.Compute/virtualMachines",
"virtual_machine_size": "Standard_DS4"
}
Host Groups
...........
By default hosts are grouped by:
* azure (all hosts)
* location name
* resource group name
* security group name
* tag key
* tag key_value
* os_disk operating_system_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables or creating an
azure_rm.ini file in your current working directory.
NOTE: An .ini file will take precedence over environment variables.
NOTE: The name of the .ini file is the basename of the inventory script (in other words, 'azure_rm') with a '.ini'
extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in
the same directory.
Control grouping using the following variables defined in the environment:
* AZURE_GROUP_BY_RESOURCE_GROUP=yes
* AZURE_GROUP_BY_LOCATION=yes
* AZURE_GROUP_BY_SECURITY_GROUP=yes
* AZURE_GROUP_BY_TAG=yes
* AZURE_GROUP_BY_OS_FAMILY=yes
Select hosts within specific resource groups by assigning a comma separated list to:
* AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b
Select hosts for specific tag key by assigning a comma separated list of tag keys to:
* AZURE_TAGS=key1,key2,key3
Select hosts for specific locations by assigning a comma separated list of locations to:
* AZURE_LOCATIONS=eastus,eastus2,westus
Or, select hosts for specific tag key:value pairs by assigning a comma separated list key:value pairs to:
* AZURE_TAGS=key1:value1,key2:value2
If you don't need the powerstate, you can improve performance by turning off powerstate fetching:
* AZURE_INCLUDE_POWERSTATE=no
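For example, a shell setup that groups only by tag, restricts the inventory to a single resource group, and skips powerstate lookups might look like this (the resource group name is a placeholder):

.. code-block:: bash

    $ export AZURE_GROUP_BY_TAG=yes
    $ export AZURE_RESOURCE_GROUPS=Testing
    $ export AZURE_INCLUDE_POWERSTATE=no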
A sample ``azure_rm.ini`` file is available alongside the inventory script
`here <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.ini>`_.
The .ini file contains the following:
.. code-block:: ini
[azure]
# Control which resource groups are included. By default all resources groups are included.
# Set resource_groups to a comma separated list of resource groups names.
#resource_groups=
# Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
#tags=
# Control which locations are included. Set locations to a comma separated list of locations.
#locations=
# Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
# Valid values: yes, no, true, false, True, False, 0, 1.
include_powerstate=yes
# Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
Examples
........
Here are some examples using the inventory script:
.. code-block:: bash
# Download inventory script
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping
# Execute ping on all Linux instances
$ ansible -i azure_rm.py linux -m ping
# Use the inventory script to print instance specific information
$ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
# Use the inventory script with ansible-playbook
$ ansible-playbook -i ./azure_rm.py test_playbook.yml
Here is a simple playbook to exercise the Azure inventory script:
.. code-block:: yaml
- name: Test the inventory script
hosts: azure
connection: local
      gather_facts: false
tasks:
- debug:
msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
You can execute the playbook with something like:
.. code-block:: bash
$ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml
Disabling certificate validation on Azure endpoints
...................................................
When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
the "cert_validation_mode" value in a credential profile, via the "AZURE_CERT_VALIDATION_MODE" environment variable, or
by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
which takes precedence over the environment value.
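As an illustrative sketch, disabling validation for a single task might look like the following (shown here with the ``azure_rm_resourcegroup_info`` module, though any Azure module accepts the argument):

.. code-block:: yaml

    - name: Get resource group info with certificate validation disabled
      azure.azcollection.azure_rm_resourcegroup_info:
        name: Testing
        cert_validation_mode: ignore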
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,922 |
Docs: scenario guides: Replace yes/no booleans with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/scenario_guides/` files.
Changes are: change `yes` to `true` and `no` to `false`. These boolean values must be lowercase.
Please open one PR to handle these changes. It should impact 16 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/scenario_guides/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78922
|
https://github.com/ansible/ansible/pull/78934
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
|
5b333c9665182e20c2dfbed64006ced12e897ccb
| 2022-09-29T14:10:23Z |
python
| 2022-10-03T20:40:12Z |
docs/docsite/rst/scenario_guides/guide_gce.rst
|
Google Cloud Platform Guide
===========================
.. gce_intro:
Introduction
--------------------------
Ansible + Google have been working together on a set of auto-generated
Ansible modules designed to consistently and comprehensively cover the entirety
of the Google Cloud Platform (GCP).
Ansible contains modules for managing Google Cloud Platform resources,
including creating instances, controlling network access, working with
persistent disks, managing load balancers, and a lot more.
These new modules can be found under a new consistent name scheme "gcp_*"
(Note: gcp_target_proxy and gcp_url_map are legacy modules, despite the "gcp_*"
name. Please use gcp_compute_target_proxy and gcp_compute_url_map instead).
Additionally, the gcp_compute inventory plugin can discover all
Google Compute Engine (GCE) instances
and make them automatically available in your Ansible inventory.
You may see a collection of other GCP modules that do not conform to this
naming convention. These are the original modules primarily developed by the
Ansible community. You will find some overlapping functionality such as with
the "gce" module and the new "gcp_compute_instance" module. Either can be
used, but you may experience issues trying to use them together.
While the community GCP modules are not going away, Google is investing effort
into the new "gcp_*" modules. Google is committed to ensuring the Ansible
community has a great experience with GCP and therefore recommends adopting
these new modules if possible.
Requirements
---------------
The GCP modules require both the ``requests`` and the
``google-auth`` libraries to be installed.
.. code-block:: bash
$ pip install requests google-auth
Alternatively, on RHEL / CentOS, the ``python-requests`` package is also
available to satisfy the ``requests`` dependency.
.. code-block:: bash
$ yum install python-requests
Credentials
-----------
It's easy to create a GCP account with credentials for Ansible. You have multiple options for
getting your credentials; here are two of the most common:
* Service Accounts (Recommended): Use JSON service accounts with specific permissions.
* Machine Accounts: Use the permissions associated with the GCP Instance you're using Ansible on.
For the following examples, we'll be using service account credentials.
To work with the GCP modules, you'll first need to get some credentials in the
JSON format:
1. `Create a Service Account <https://developers.google.com/identity/protocols/OAuth2ServiceAccount#creatinganaccount>`_
2. `Download JSON credentials <https://support.google.com/cloud/answer/6158849?hl=en&ref_topic=6262490#serviceaccounts>`_
Once you have your credentials, there are two different ways to provide them to Ansible:
* by specifying them directly as module parameters
* by setting environment variables
Providing Credentials as Module Parameters
``````````````````````````````````````````
For the GCE modules you can specify the credentials as arguments:
* ``auth_kind``: type of authentication being used (choices: machineaccount, serviceaccount, application)
* ``service_account_email``: email associated with the project
* ``service_account_file``: path to the JSON credentials file
* ``project``: id of the project
* ``scopes``: The specific scopes that you want the actions to use.
For example, to create a new IP address using the ``gcp_compute_address`` module,
you can use the following configuration:
.. code-block:: yaml
- name: Create IP address
hosts: localhost
      gather_facts: false
vars:
service_account_file: /home/my_account.json
project: my-project
auth_kind: serviceaccount
scopes:
- https://www.googleapis.com/auth/compute
tasks:
- name: Allocate an IP Address
gcp_compute_address:
state: present
name: 'test-address1'
region: 'us-west1'
project: "{{ project }}"
auth_kind: "{{ auth_kind }}"
service_account_file: "{{ service_account_file }}"
scopes: "{{ scopes }}"
Providing Credentials as Environment Variables
``````````````````````````````````````````````
Set the following environment variables before running Ansible in order to configure your credentials:
.. code-block:: bash
GCP_AUTH_KIND
GCP_SERVICE_ACCOUNT_EMAIL
GCP_SERVICE_ACCOUNT_FILE
GCP_SCOPES
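For example (the file path and values below are placeholders):

.. code-block:: bash

    $ export GCP_AUTH_KIND=serviceaccount
    $ export GCP_SERVICE_ACCOUNT_FILE=/home/my_account.json
    $ export GCP_SCOPES=https://www.googleapis.com/auth/compute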
GCE Dynamic Inventory
---------------------
The best way to interact with your hosts is to use the gcp_compute inventory plugin, which dynamically queries GCE and tells Ansible what nodes can be managed.
To be able to use this GCE dynamic inventory plugin, you need to enable it first by specifying the following in the ``ansible.cfg`` file:
.. code-block:: ini
[inventory]
enable_plugins = gcp_compute
Then, create a file that ends in ``.gcp.yml`` in your root directory.
The gcp_compute inventory plugin takes in the same authentication information as any module.
Here's an example of a valid inventory file:
.. code-block:: yaml
plugin: gcp_compute
projects:
- graphite-playground
auth_kind: serviceaccount
service_account_file: /home/alexstephen/my_account.json
Executing ``ansible-inventory --list -i <filename>.gcp.yml`` will create a list of GCP instances that are ready to be configured using Ansible.
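For example, assuming the inventory file above was saved as ``gcp_compute.gcp.yml`` (the filename is illustrative):

.. code-block:: bash

    $ ansible-inventory --list -i gcp_compute.gcp.yml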
Create an instance
``````````````````
The full range of GCP modules provide the ability to create a wide variety of
GCP resources with the full support of the entire GCP API.
The following playbook creates a GCE Instance. This instance relies on other GCP
resources like Disk. By creating other resources separately, we can give as
much detail as necessary about how we want to configure the other resources, for example
formatting of the Disk. By registering it to a variable, we can simply insert the
variable into the instance task. The gcp_compute_instance module will figure out the
rest.
.. code-block:: yaml
- name: Create an instance
hosts: localhost
      gather_facts: false
vars:
gcp_project: my-project
gcp_cred_kind: serviceaccount
gcp_cred_file: /home/my_account.json
zone: "us-central1-a"
region: "us-central1"
tasks:
        - name: Create a disk
gcp_compute_disk:
name: 'disk-instance'
size_gb: 50
source_image: 'projects/ubuntu-os-cloud/global/images/family/ubuntu-1604-lts'
zone: "{{ zone }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
state: present
register: disk
        - name: Create an address
gcp_compute_address:
name: 'address-instance'
region: "{{ region }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
state: present
register: address
        - name: Create an instance
gcp_compute_instance:
state: present
name: test-vm
machine_type: n1-standard-1
disks:
- auto_delete: true
boot: true
source: "{{ disk }}"
network_interfaces:
- network: null # use default
access_configs:
- name: 'External NAT'
nat_ip: "{{ address }}"
type: 'ONE_TO_ONE_NAT'
zone: "{{ zone }}"
project: "{{ gcp_project }}"
auth_kind: "{{ gcp_cred_kind }}"
service_account_file: "{{ gcp_cred_file }}"
scopes:
- https://www.googleapis.com/auth/compute
register: instance
- name: Wait for SSH to come up
wait_for: host={{ address.address }} port=22 delay=10 timeout=60
- name: Add host to groupname
add_host: hostname={{ address.address }} groupname=new_instances
- name: Manage new instances
hosts: new_instances
connection: ssh
      become: true
roles:
- base_configuration
- production_server
Note that use of the "add_host" module above creates a temporary, in-memory group. This means that a play in the same playbook can then manage machines
in the 'new_instances' group, if so desired. Any sort of arbitrary configuration is possible at this point.
For more information about Google Cloud, please visit the `Google Cloud website <https://cloud.google.com>`_.
Migration Guides
----------------
gce.py -> gcp_compute_instance.py
`````````````````````````````````
As of Ansible 2.8, we're encouraging everyone to move from the ``gce`` module to the
``gcp_compute_instance`` module. The ``gcp_compute_instance`` module has better
support for all of GCP's features, fewer dependencies, more flexibility, and
better supports GCP's authentication systems.
The ``gcp_compute_instance`` module supports all of the features of the ``gce``
module (and more!). Below is a mapping of ``gce`` fields over to
``gcp_compute_instance`` fields.
============================ ========================================== ======================
gce.py gcp_compute_instance.py Notes
============================ ========================================== ======================
state state/status State on gce has multiple values: "present", "absent", "stopped", "started", "terminated". State on gcp_compute_instance is used to describe if the instance exists (present) or does not (absent). Status is used to describe if the instance is "started", "stopped" or "terminated".
image disks[].initialize_params.source_image You'll need to create a single disk using the disks[] parameter and set it to be the boot disk (disks[].boot = true)
image_family disks[].initialize_params.source_image See above.
external_projects disks[].initialize_params.source_image The name of the source_image will include the name of the project.
instance_names Use a loop or multiple tasks. Using loops is a more Ansible-centric way of creating multiple instances and gives you the most flexibility.
service_account_email service_accounts[].email This is the service_account email address that you want the instance to be associated with. It is not the service_account email address that is used for the credentials necessary to create the instance.
service_account_permissions service_accounts[].scopes These are the permissions you want to grant to the instance.
pem_file Not supported. We recommend using JSON service account credentials instead of PEM files.
credentials_file service_account_file
project_id project
name name This field does not accept an array of names. Use a loop to create multiple instances.
num_instances Use a loop For maximum flexibility, we're encouraging users to use Ansible features to create multiple instances, rather than letting the module do it for you.
network network_interfaces[].network
subnetwork network_interfaces[].subnetwork
persistent_boot_disk disks[].type = 'PERSISTENT'
disks disks[]
ip_forward can_ip_forward
external_ip network_interfaces[].access_configs.nat_ip This field takes multiple types of values. You can create an IP address with ``gcp_compute_address`` and place the name/output of the address here. You can also place the string value of the IP address's GCP name or the actual IP address.
disks_auto_delete disks[].auto_delete
preemptible scheduling.preemptible
disk_size disks[].initialize_params.disk_size_gb
============================ ========================================== ======================
An example playbook is below:
.. code:: yaml
gcp_compute_instance:
name: "{{ item }}"
machine_type: n1-standard-1
... # any other settings
zone: us-central1-a
project: "my-project"
auth_kind: "service_account_file"
service_account_file: "~/my_account.json"
state: present
loop:
- instance-1
- instance-2
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,922 |
Docs: scenario guides: Replace yes/no booleans with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/scenario_guides/` files.
Changes are: change `yes` to `true` and `no` to `false`. These boolean values must be lowercase.
Please open one PR to handle these changes. It should impact 16 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/scenario_guides/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78922
|
https://github.com/ansible/ansible/pull/78934
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
|
5b333c9665182e20c2dfbed64006ced12e897ccb
| 2022-09-29T14:10:23Z |
python
| 2022-10-03T20:40:12Z |
docs/docsite/rst/scenario_guides/guide_rax.rst
|
Rackspace Cloud Guide
=====================
.. _rax_introduction:
Introduction
````````````
.. note:: Rackspace functionality in Ansible is not maintained and users should consider the `OpenStack collection <https://galaxy.ansible.com/openstack/cloud>`_ instead.
Ansible contains a number of core modules for interacting with Rackspace Cloud.
The purpose of this section is to explain how to put Ansible modules together
(and use inventory scripts) to use Ansible in a Rackspace Cloud context.
Prerequisites for using the rax modules are minimal. In addition to Ansible itself,
all of the modules require and are tested against pyrax 1.5 or higher.
You'll need this Python module installed on the execution host.
``pyrax`` is not currently available in many operating system
package repositories, so you will likely need to install it via pip:
.. code-block:: bash
$ pip install pyrax
Ansible creates an implicit localhost that executes in the same context as the ``ansible-playbook`` and the other CLI tools.
If for any reason you need or want to have it in your inventory you should do something like the following:
.. code-block:: ini
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
For more information see :ref:`Implicit Localhost <implicit_localhost>`
In playbook steps, we'll typically be using the following pattern:
.. code-block:: yaml
- hosts: localhost
      gather_facts: false
tasks:
.. _credentials_file:
Credentials File
````````````````
The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like:
.. code-block:: ini
[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e
Setting the environment parameter ``RAX_CREDS_FILE`` to the path of this file will help Ansible find how to load
this information.
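For example:

.. code-block:: bash

    $ export RAX_CREDS_FILE=~/.raxpub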
More information about this credentials file can be found at
https://github.com/pycontribs/pyrax/blob/master/docs/getting_started.md#authenticating
.. _virtual_environment:
Running from a Python Virtual Environment (Optional)
++++++++++++++++++++++++++++++++++++++++++++++++++++
Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
There are special considerations when Ansible is installed to a Python virtualenv, rather than the default of installing at a global scope. Ansible assumes, unless otherwise instructed, that the Python binary will live at /usr/bin/python. This is done through the interpreter line in modules; however, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running through 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
.. code-block:: ini
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
.. note::
pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
.. _provisioning:
Provisioning
````````````
Now for the fun parts.
The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:
- Avoiding installing the pyrax library on remote nodes
- No need to encrypt and distribute credentials to remote nodes
- Speed and simplicity
.. note::
Authentication with the Rackspace-related modules is handled by either
specifying your username and API key as environment variables or passing
them as module arguments, or by specifying the location of a credentials
file.
Here is a basic example of provisioning an instance in ad hoc mode:
.. code-block:: bash
    $ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=true"
Here's what it would look like in a playbook, assuming the parameters were defined in variables:
.. code-block:: yaml
tasks:
- name: Provision a set of instances
rax:
name: "{{ rax_name }}"
flavor: "{{ rax_flavor }}"
image: "{{ rax_image }}"
count: "{{ rax_count }}"
group: "{{ group }}"
          wait: true
register: rax
delegate_to: localhost
The rax module returns data about the nodes it creates, like IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
.. code-block:: yaml
- name: Add the instances we created (by public IP) to the group 'raxhosts'
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
groups: raxhosts
loop: "{{ rax.success }}"
when: rax.action == 'create'
With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
.. code-block:: yaml
- name: Configuration play
hosts: raxhosts
user: root
roles:
- ntp
- webserver
The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
to the next section.
.. _host_inventory:
Host Inventory
``````````````
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances via other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are chmod +x, and the INI-based ones are not.
.. _raxpy:
rax.py
++++++
To use the Rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.
.. note:: Users of :ref:`ansible_platform` will note that dynamic inventory is natively supported by the controller in the platform, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps::
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
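For example, a hypothetical invocation restricted to the ORD and DFW regions:

.. code-block:: bash

    $ RAX_CREDS_FILE=~/.raxpub RAX_REGION=ORD,DFW ansible all -i rax.py -m setup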
When using ``rax.py``, you will not have a 'localhost' defined in the inventory.
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this, would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.
Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
of an individual file, will cause ansible to evaluate each file in that directory for inventory.
Let's test our inventory script to see if it can talk to Rackspace Cloud.
.. code-block:: bash
$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the
following information, which will be utilized for inventory and variables.
.. code-block:: json
{
"ORD": [
"test"
],
"_meta": {
"hostvars": {
"test": {
"ansible_host": "198.51.100.1",
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
}
}
}
}
.. _standard_inventory:
Standard Inventory
++++++++++++++++++
When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.
This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:
.. code-block:: ini
[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD
.. code-block:: yaml
- name: Gather info about servers
hosts: test_servers
      gather_facts: false
tasks:
- name: Get facts about servers
rax_facts:
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: "{{ rax_region }}"
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
While you don't need to know how it works, it may be interesting to know what kind of variables are returned.
The ``rax_facts`` module provides facts as following, which match the ``rax.py`` inventory script:
.. code-block:: json
{
"ansible_facts": {
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
},
"changed": false
}
Use Cases
`````````
This section covers some additional usage examples built around a specific use case.
.. _network_and_server:
Network and Server
++++++++++++++++++
Create an isolated cloud network and build a server
.. code-block:: yaml
- name: Build Servers on an Isolated Network
hosts: localhost
      gather_facts: false
tasks:
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
region: IAD
state: present
delegate_to: localhost
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: 2
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- my-net
region: IAD
state: present
count: 5
            exact_count: true
group: web
            wait: true
wait_timeout: 360
register: rax
delegate_to: localhost
.. _complete_environment:
Complete Environment
++++++++++++++++++++
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html
.. code-block:: yaml
---
- name: Build environment
hosts: localhost
      gather_facts: false
tasks:
- name: Load Balancer create request
rax_clb:
credentials: ~/.raxpub
name: my-lb
port: 80
protocol: HTTP
algorithm: ROUND_ROBIN
type: PUBLIC
timeout: 30
region: IAD
            wait: true
state: present
meta:
app: my-cool-app
register: clb
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
region: IAD
register: network
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- private
- my-net
region: IAD
state: present
count: 5
            exact_count: true
group: web
            wait: true
register: rax
- name: Add servers to web host group
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
groups: web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Add servers to Load balancer
rax_clb_nodes:
credentials: ~/.raxpub
load_balancer_id: "{{ clb.balancer.id }}"
address: "{{ item.rax_networks.private|first }}"
port: 80
condition: enabled
type: primary
            wait: true
region: IAD
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Configure servers
hosts: web
handlers:
- name: restart nginx
service: name=nginx state=restarted
tasks:
- name: Install nginx
          apt: pkg=nginx state=latest update_cache=true cache_valid_time=86400
notify:
- restart nginx
- name: Ensure nginx starts on boot
          service: name=nginx state=started enabled=true
- name: Create custom index.html
copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
owner=root group=root mode=0644
.. _rackconnect_and_manged_cloud:
RackConnect and Managed Cloud
+++++++++++++++++++++++++++++
When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.
These examples show creating servers, and ensuring that the Rackspace automation has completed before Ansible continues onwards.
For simplicity, these examples are joined, however both are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.
The RackConnect portions only apply to RackConnect version 2.
.. _using_a_control_machine:
Using a Control Machine
***********************
.. code-block:: yaml
- name: Create an exact count of servers
hosts: localhost
      gather_facts: false
tasks:
- name: Server build requests
rax:
credentials: ~/.raxpub
name: web%03d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
region: DFW
state: present
count: 1
            exact_count: true
group: web
            wait: true
register: rax
- name: Add servers to in memory groups
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
rax_id: "{{ item.rax_id }}"
groups: web,new_web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Wait for rackconnect and managed cloud automation to complete
hosts: new_web
gather_facts: false
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
            - name: Wait for rackconnect automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
retries: 30
delay: 10
- name: Update new_web hosts with IP that RackConnect assigns
hosts: new_web
gather_facts: false
tasks:
- name: Get facts about servers
rax_facts:
name: "{{ inventory_hostname }}"
region: DFW
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
- name: Base Configure Servers
hosts: web
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull:
Using Ansible Pull
******************
.. code-block:: yaml
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Get region
command: xenstore-read vm-data/provider_data/region
register: rax_region
when: bootstrap.stat.exists != True
- name: Wait for rackconnect automation to complete
uri:
url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
                return_content: true
register: automation_status
when: bootstrap.stat.exists != True
until: automation_status['automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
wait_for:
path: /tmp/rs_managed_cloud_automation_complete
delay: 10
when: bootstrap.stat.exists != True
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull_with_xenstore:
Using Ansible Pull with XenStore
********************************
.. code-block:: yaml
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Wait for rackconnect_automation_status xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
register: rcas_exists
when: bootstrap.stat.exists != True
failed_when: rcas_exists.rc|int > 1
until: rcas_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for rackconnect automation to complete
command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
register: rcas
when: bootstrap.stat.exists != True
until: rcas.stdout|replace('"', '') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for rax_service_level_automation xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
register: rsla_exists
when: bootstrap.stat.exists != True
failed_when: rsla_exists.rc|int > 1
until: rsla_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
          command: xenstore-read vm-data/user-metadata/rax_service_level_automation
register: rsla
when: bootstrap.stat.exists != True
          until: rsla.stdout|replace('"', '') == 'Complete'
retries: 30
delay: 10
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _advanced_usage:
Advanced Usage
``````````````
.. _awx_autoscale:
Autoscaling with AWX or Red Hat Ansible Automation Platform
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The GUI component of :ref:`Red Hat Ansible Automation Platform <ansible_tower>` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
to reconfigure ephemeral nodes. See `the documentation on provisioning callbacks <https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks>`_ for more details.
A benefit of using the callback approach over pull mode is that job results are still centrally recorded
and less information has to be shared with remote hosts.
.. _pending_information:
Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++
Ansible is a powerful orchestration tool, and rax modules allow you the opportunity to orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example:
* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,922 |
Docs: scenario guides: Replace yes/no booleans with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/scenario_guides/` files.
Changes are: change `yes` to `true` and `no` to `false`. These boolean values must be lowercase.
Please open one PR to handle these changes. It should impact 16 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/scenario_guides/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78922
|
https://github.com/ansible/ansible/pull/78934
|
5137cb16e915bd8d0a06bdc659cbc0f65ea9a6b2
|
5b333c9665182e20c2dfbed64006ced12e897ccb
| 2022-09-29T14:10:23Z |
python
| 2022-10-03T20:40:12Z |
docs/docsite/rst/scenario_guides/guide_scaleway.rst
|
.. _guide_scaleway:
**************
Scaleway Guide
**************
.. _scaleway_introduction:
Introduction
============
`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible, version 2.6 or higher, through a dynamic inventory plugin and modules.
Those modules are:
- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
- :ref:`scaleway_compute_module`: manages servers on Scaleway. You can use this module to create, restart and delete servers.
- :ref:`scaleway_volume_module`: manages volumes on Scaleway.
.. note::
This guide assumes you are familiar with Ansible and how it works.
If you're not, have a look at :ref:`ansible_documentation` before getting started.
.. _scaleway_requirements:
Requirements
============
The Scaleway modules and inventory script connect to the Scaleway API using the `Scaleway REST API <https://developer.scaleway.com>`_.
To use the modules and inventory script you'll need a Scaleway API token.
You can generate an API token via the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
.. code-block:: bash
$ export SCW_TOKEN=00000000-1111-2222-3333-444444444444
If you're not comfortable exporting your API token, you can pass it as a parameter to the modules using the ``api_token`` argument.
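For example, a task passing the token explicitly (the token value below is a placeholder) might look like:

.. code-block:: yaml

    - name: Add an SSH key, passing the API token as a parameter
      scaleway_sshkey:
        ssh_pub_key: "ssh-rsa AAAA..."
        state: present
        api_token: "00000000-1111-2222-3333-444444444444"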
If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
.. code-block:: bash
$ ssh-keygen -t rsa -f ./id_rsa
If you want to use an existing key pair, just copy the private and public key over to the playbook directory.
.. _scaleway_add_sshkey:
How to add an SSH key?
======================
Connections to Scaleway Compute nodes use Secure Shell (SSH).
SSH keys are stored at the account level, which means that you can reuse the same SSH key on multiple nodes.
The first step in configuring Scaleway compute resources is to have at least one SSH key configured.
:ref:`scaleway_sshkey_module` is a module that manages SSH keys on your Scaleway account.
You can add an SSH key to your account by including the following task in a playbook:
.. code-block:: yaml
- name: "Add SSH key"
scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAA..."
state: "present"
The ``ssh_pub_key`` parameter contains your ssh public key as a string. Here is an example inside a playbook:
.. code-block:: yaml
- name: Test SSH key lifecycle on a Scaleway account
hosts: localhost
      gather_facts: false
environment:
SCW_API_KEY: ""
tasks:
- scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAAB...424242 [email protected]"
state: present
register: result
- assert:
that:
- result is success and result is changed
.. _scaleway_create_instance:
How to create a compute instance?
=================================
Now that we have an SSH key configured, the next step is to spin up a server!
:ref:`scaleway_compute_module` is a module that can create, update and delete Scaleway compute instances:
.. code-block:: yaml
- name: Create a server
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
Here are the parameter details for the example shown above:
- ``name`` is the name of the instance (the one that will show up in your web console).
- ``image`` is the UUID of the system image you would like to use.
A list of all images is available for each availability zone.
- ``organization`` represents the organization that your account is attached to.
- ``region`` represents the Availability Zone in which your instance is located (for this example, par1 and ams1).
- ``commercial_type`` represents the name of the commercial offers.
You can check out the Scaleway pricing page to find which instance is right for you.
Take a look at this short playbook to see a working example using ``scaleway_compute``:
.. code-block:: yaml
- name: Test compute instance lifecycle on a Scaleway account
hosts: localhost
gather_facts: no
environment:
SCW_API_KEY: ""
tasks:
- name: Create a server
register: server_creation_task
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
- debug: var=server_creation_task
- assert:
that:
- server_creation_task is success
- server_creation_task is changed
- name: Run it
scaleway_compute:
name: foobar
state: running
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
tags:
- web_server
register: server_run_task
- debug: var=server_run_task
- assert:
that:
- server_run_task is success
- server_run_task is changed
.. _scaleway_dynamic_inventory_tutorial:
Dynamic Inventory Script
========================
Ansible ships with :ref:`scaleway_inventory`.
You can now get a complete inventory of your Scaleway resources through this plugin and filter it on
different parameters (``regions`` and ``tags`` are currently supported).
Let's create an example!
Suppose that we want all hosts that have the tag ``web_server``.
Create a file named ``scaleway_inventory.yml`` with the following content:
.. code-block:: yaml
plugin: scaleway
regions:
- ams1
- par1
tags:
- web_server
This configuration means that we want all hosts that have the tag ``web_server`` in the zones ``ams1`` and ``par1``.
Once you have configured this file, you can get the information using the following command:
.. code-block:: bash
$ ansible-inventory --list -i scaleway_inventory.yml
The output will be:
.. code-block:: yaml
{
"_meta": {
"hostvars": {
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d": {
"ansible_verbosity": 6,
"arch": "x86_64",
"commercial_type": "START1-S",
"hostname": "foobar",
"ipv4": "192.0.2.1",
"organization": "00000000-1111-2222-3333-444444444444",
"state": "running",
"tags": [
"web_server"
]
}
}
},
"all": {
"children": [
"ams1",
"par1",
"ungrouped",
"web_server"
]
},
"ams1": {},
"par1": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
},
"ungrouped": {},
"web_server": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
}
}
As you can see, we get different groups of hosts.
``par1`` and ``ams1`` are groups based on location.
``web_server`` is a group based on a tag.
If a filter parameter is not defined, the plugin assumes all possible values are wanted.
This means that a group will be created for each tag that exists on your Scaleway compute nodes.
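Once the plugin is configured, you can target the generated groups directly. For example, a quick ad hoc check against the ``web_server`` group (assuming SSH access to the instances is already in place):

.. code-block:: bash

    $ ansible -i scaleway_inventory.yml web_server -m ping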
Scaleway S3 object storage
==========================
`Object Storage <https://www.scaleway.com/object-storage>`_ allows you to store any kind of objects (documents, images, videos, and so on).
As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: :ref:`s3_bucket_module`, :ref:`aws_s3_module`.
You can find many examples in the `scaleway_s3 integration tests <https://github.com/ansible/ansible-legacy-tests/tree/devel/test/legacy/roles/scaleway_s3>`_.
.. code-block:: yaml+jinja
- hosts: myserver
vars:
scaleway_region: nl-ams
s3_url: https://s3.nl-ams.scw.cloud
environment:
# AWS_ACCESS_KEY matches your scaleway organization id available at https://cloud.scaleway.com/#/account
AWS_ACCESS_KEY: 00000000-1111-2222-3333-444444444444
# AWS_SECRET_KEY matches a secret token that you can retrieve at https://cloud.scaleway.com/#/credentials
AWS_SECRET_KEY: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
module_defaults:
group/aws:
s3_url: '{{ s3_url }}'
region: '{{ scaleway_region }}'
tasks:
# use a fact instead of a variable, otherwise the template is evaluated each time the variable is used
- set_fact:
bucket_name: "{{ 99999999 | random | to_uuid }}"
# "requester_pays:" is mandatory because Scaleway doesn't implement related API
# another way is to use aws_s3 and "mode: create" !
- s3_bucket:
name: '{{ bucket_name }}'
requester_pays:
- name: Another way to create the bucket
aws_s3:
bucket: '{{ bucket_name }}'
mode: create
encrypt: false
register: bucket_creation_check
- name: add something in the bucket
aws_s3:
mode: put
bucket: '{{ bucket_name }}'
src: /tmp/test.txt # needs to be created beforehand
object: test.txt
encrypt: false # server side encryption must be disabled
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,924 |
Docs: reference_appendices: replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/reference_appendices/` files.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 7 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/reference_appendices/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/faq.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
no
```
### OS / Environment
no
### Additional Information
no
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78924
|
https://github.com/ansible/ansible/pull/78950
|
fbd98cd8246e8554269b2c766db2b2228cc30bd9
|
78c9fb415954ca630f028fe7a7d154658fc41422
| 2022-09-29T14:27:08Z |
python
| 2022-10-03T20:49:43Z |
docs/docsite/rst/reference_appendices/YAMLSyntax.rst
|
.. _yaml_syntax:
YAML Syntax
===========
This page provides a basic overview of correct YAML syntax, which is how Ansible
playbooks (our configuration management language) are expressed.
We use YAML because it is easier for humans to read and write than other common
data formats like XML or JSON. Further, there are libraries available in most
programming languages for working with YAML.
You may also wish to read :ref:`working_with_playbooks` at the same time to see how this
is used in practice.
YAML Basics
-----------
For Ansible, nearly every YAML file starts with a list.
Each item in the list is a set of key/value pairs, commonly
called a "hash" or a "dictionary". So, we need to know how
to write lists and dictionaries in YAML.
There's another small quirk to YAML. All YAML files (regardless of their association with Ansible or not) can optionally
begin with ``---`` and end with ``...``. This is part of the YAML format and indicates the start and end of a document.
All members of a list are lines beginning at the same indentation level starting with a ``"- "`` (a dash and a space)::
---
# A list of tasty fruits
- Apple
- Orange
- Strawberry
- Mango
...
A dictionary is represented in a simple ``key: value`` form (the colon must be followed by a space)::
# An employee record
martin:
name: Martin D'vloper
job: Developer
skill: Elite
More complicated data structures are possible, such as lists of dictionaries, dictionaries whose values are lists or a mix of both::
# Employee records
- martin:
name: Martin D'vloper
job: Developer
skills:
- python
- perl
- pascal
- tabitha:
name: Tabitha Bitumen
job: Developer
skills:
- lisp
- fortran
- erlang
Dictionaries and lists can also be represented in an abbreviated form if you really want to::
---
martin: {name: Martin D'vloper, job: Developer, skill: Elite}
fruits: ['Apple', 'Orange', 'Strawberry', 'Mango']
These are called "Flow collections".
.. _truthiness:
Ansible doesn't really use these too much, but you can also specify a :ref:`boolean value <playbooks_variables>` (true/false) in several forms::

    create_key: true
    needs_agent: false
    knows_oop: True
    likes_emacs: TRUE
    uses_cvs: false
Use lowercase 'true' or 'false' for boolean values in dictionaries if you want to be compatible with default yamllint options.
Values can span multiple lines using ``|`` or ``>``. Spanning multiple lines using a "Literal Block Scalar" ``|`` will include the newlines and any trailing spaces.
Using a "Folded Block Scalar" ``>`` will fold newlines to spaces; it's used to make what would otherwise be a very long line easier to read and edit.
In either case the indentation will be ignored.
Examples are::
include_newlines: |
exactly as you see
will appear these three
lines of poetry
fold_newlines: >
this is really a
single line of text
despite appearances
While in the above ``>`` example all newlines are folded into spaces, there are two ways to enforce a newline to be kept::
fold_some_newlines: >
a
b
c
d
e
f
Alternatively, it can be enforced by including newline ``\n`` characters::
fold_same_newlines: "a b\nc d\n e\nf\n"
Let's combine what we learned so far in an arbitrary YAML example.
This really has nothing to do with Ansible, but will give you a feel for the format::
---
# An employee record
name: Martin D'vloper
job: Developer
skill: Elite
employed: True
foods:
- Apple
- Orange
- Strawberry
- Mango
languages:
perl: Elite
python: Elite
pascal: Lame
education: |
4 GCSEs
3 A-Levels
BSc in the Internet of Things
That's all you really need to know about YAML to start writing `Ansible` playbooks.
Gotchas
-------
While you can put just about anything into an unquoted scalar, there are some exceptions.
A colon followed by a space (or newline) ``": "`` is an indicator for a mapping.
A space followed by the pound sign ``" #"`` starts a comment.
Because of this, the following is going to result in a YAML syntax error::
foo: somebody said I should put a colon here: so I did
windows_drive: c:
...but this will work::
windows_path: c:\windows
You will want to quote hash values that contain a colon followed by a space or the end of the line::
foo: 'somebody said I should put a colon here: so I did'
windows_drive: 'c:'
...and then the colon will be preserved.
Alternatively, you can use double quotes::
foo: "somebody said I should put a colon here: so I did"
windows_drive: "c:"
The difference between single quotes and double quotes is that in double quotes
you can use escapes::
foo: "a \t TAB and a \n NEWLINE"
The list of allowed escapes can be found in the YAML Specification under "Escape Sequences" (YAML 1.1) or "Escape Characters" (YAML 1.2).
The following is invalid YAML:
.. code-block:: text
foo: "an escaped \' single quote"
Further, Ansible uses "{{ var }}" for variables. If a value after a colon starts
with a "{", YAML will think it is a dictionary, so you must quote it, like so::
foo: "{{ variable }}"
If your value starts with a quote the entire value must be quoted, not just part of it. Here are some additional examples of how to properly quote things::
foo: "{{ variable }}/additional/string/literal"
foo2: "{{ variable }}\\backslashes\\are\\also\\special\\characters"
foo3: "even if it's just a string literal it must all be quoted"
Not valid::
foo: "E:\\path\\"rest\\of\\path
In addition to ``'`` and ``"`` there are a number of characters that are special (or reserved) and cannot be used
as the first character of an unquoted scalar: ``[] {} > | * & ! % # ` @ ,``.
You should also be aware of ``? : -``. In YAML, they are allowed at the beginning of a string if a non-space
character follows, but YAML processor implementations differ, so it's better to use quotes.
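For example, quoting allows values to begin with otherwise-reserved characters (a minimal sketch)::

    cron_style: "@every 5m"
    dash_text: '- not a list item'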
In Flow Collections, the rules are a bit more strict::
a scalar in block mapping: this } is [ all , valid
flow mapping: { key: "you { should [ use , quotes here" }
Boolean conversion is helpful, but this can be a problem when you want a literal `yes` or other boolean values as a string.
In these cases just use quotes::
non_boolean: "yes"
other_string: "False"
YAML converts certain strings into floating-point values, such as the string
`1.0`. If you need to specify a version number (in a requirements.yml file, for
example), you will need to quote the value if it looks like a floating-point
value::
version: "1.0"
.. seealso::
:ref:`working_with_playbooks`
Learn what playbooks can do and how to write/run them.
`YAMLLint <http://yamllint.com/>`_
YAML Lint (online) helps you debug YAML syntax if you are having problems
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the github project source
`Wikipedia YAML syntax reference <https://en.wikipedia.org/wiki/YAML>`_
A good guide to YAML syntax
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
:ref:`communication_irc`
How to join Ansible chat channels (join #yaml for yaml-specific questions)
`YAML 1.1 Specification <https://yaml.org/spec/1.1/>`_
The Specification for YAML 1.1, which PyYAML and libyaml are currently
implementing
`YAML 1.2 Specification <https://yaml.org/spec/1.2/spec.html>`_
For completeness, YAML 1.2 is the successor of 1.1
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,924 |
Docs: reference_appendices: replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/reference_appendices/` files.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 7 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/reference_appendices/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/faq.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
no
```
### OS / Environment
no
### Additional Information
no
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78924
|
https://github.com/ansible/ansible/pull/78950
|
fbd98cd8246e8554269b2c766db2b2228cc30bd9
|
78c9fb415954ca630f028fe7a7d154658fc41422
| 2022-09-29T14:27:08Z |
python
| 2022-10-03T20:49:43Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _collections_transition:
Where did all the modules go?
+++++++++++++++++++++++++++++
In July, 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10).
* To learn more about using collections, see :ref:`collections`.
* To learn more about developing collections, see :ref:`developing_collections`.
* To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections.
.. _find_my_module:
Where did this specific module go?
++++++++++++++++++++++++++++++++++
If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`.
.. _slow_install:
How can I speed up Ansible on systems with slow disks?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible may feel sluggish on systems with slow disks, such as Raspberry PI. See `Ansible might be running slow if libyaml is not available <https://www.jeffgeerling.com/blog/2021/ansible-might-be-running-slow-if-libyaml-not-available>`_ for hints on how to improve this.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire play?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set environment variables with the ``environment`` keyword, at the task level or other levels in the play.
.. code-block:: yaml
    - name: Print the date with a French locale
      shell:
        cmd: date
      environment:
        LANG: fr_FR.UTF-8
.. code-block:: yaml
    - hosts: servers
      environment:
        PATH: "{{ ansible_env.PATH }}:/thingy/bin"
        SOME: value
.. note:: Starting in 2.0.1, the setup task from ``gather_facts`` also inherits the environment directive from the play, so you might need to use the ``|default`` filter to avoid errors if setting this at the play level.
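For example, a minimal sketch of the ``|default`` guard mentioned in the note above, so the expression does not fail when ``ansible_env`` is not yet populated (the fallback path is illustrative):

.. code-block:: yaml

    - hosts: servers
      environment:
        PATH: "{{ ansible_env.PATH | default('/usr/bin:/bin') }}:/thingy/bin"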
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
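For example, a hypothetical ``group_vars/webservers.yml`` carrying settings shared by every host in the ``[webservers]`` group (the values are illustrative):

.. code-block:: yaml

    # group_vars/webservers.yml
    ansible_user: deploy
    ansible_port: 5022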
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use
Native OpenSSH for connections instead of the python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, when you first install Ansible on these enterprise operating systems, it offers a better experience for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a ``ProxyCommand`` in the
``ansible_ssh_common_args`` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents::
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group ``gatewayed``. (These arguments are used
in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to
repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.)
Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute ``nc %h:%p`` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``,
or globally by setting ``ssh_args`` in ``ansible.cfg``.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option,
SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval``
into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that
``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
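For example, a sketch of the corresponding ``ansible.cfg`` entry (the interval value is illustrative; the ``ControlMaster``/``ControlPersist`` options shown match Ansible's usual defaults):

.. code-block:: ini

    [ssh_connection]
    ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=30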
.. _cloud_provider_performance:
How do I speed up Ansible runs for servers from cloud providers (EC2, OpenStack, and so on)?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of machines at a cloud provider from your laptop.
Instead, connect to a management node inside that cloud provider's network first and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
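For example, a minimal inventory sketch (the interpreter path is a placeholder for wherever a suitable Python lives on that host):

.. code-block:: ini

    [legacy]
    host1.example.com ansible_python_interpreter=/usr/local/bin/python3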
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter via
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python via
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``),
this facility will be ignored so you will be at the mercy of the remote `$PATH`.
.. _installation_faqs:
How do I handle the package dependencies required by Ansible package dependencies during Ansible installation ?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages, which are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
In order to solve these kinds of dependency issues, you might need to install required packages using
the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available via pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./ansible/lib64/python3.*/site-packages/
Running on macOS
----------------
When executing Ansible on a system with macOS as a controller machine one might encounter the following error:
.. error::
+[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
ERROR! A worker was found in a dead state
In general the recommended workaround is to set the following environment variable in your shell:
.. code-block:: shell
$ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
(see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set::
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this::
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
this in inventory like so::
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode via SFTP; however, execution of python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
To fix this set the path to the python installation in your inventory like so::
zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Python startup fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``::
zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
Running under fakeroot
----------------------
Some issues arise because ``fakeroot`` does not create a full, POSIX-compliant system by default.
It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
For example, in the ansible config file (or via environment variable) you can set::
remote_tmp=$HOME/.ansible/tmp
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these concepts are unfamiliar, see the API documentation for more details about the ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1``
in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module
as an ad hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe
the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question
if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to use a play that gathers facts first, so the facts are available to the task above.
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can just access the ``groups`` dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname,
you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers::
- hosts: db_servers
tasks:
- debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname``
is a magic variable that indicates the current host you are looping over in the host loop.
In the example above, if your interface names have dashes, you must replace them with underscores:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }}
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk assign variables;
they are not a first-class entity, and Ansible only cares about hosts and tasks.
That said, you could just access the variable by selecting a host that is part of that group, see first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the IP address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud-based
inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 ``{% set %}`` directive to simplify this, or in a playbook, you could also use set_fact::
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The ``copy`` module has a recursive parameter. However, take a look at the ``synchronize`` module if you want to do something more efficient
for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules.
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
**On the controller machine:** To access existing environment variables on the controller, use the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine::
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
**On target machines:** Environment variables are available through facts in the ``ansible_env`` variable:
.. code-block:: jinja
{{ ansible_env.HOME }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment`
in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the
:ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>`
modules to introduce environment variables into files. The exact files to edit vary depending on your OS
and distribution and local configuration.
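As a hedged sketch, one common approach uses ``lineinfile`` to persist a variable in a shell profile (the file path and variable name are illustrative and OS-dependent):

.. code-block:: yaml

    - name: Persist an environment variable for login shells
      lineinfile:
        path: /etc/profile.d/custom.sh
        line: 'export APP_ENV=production'
        create: true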
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An Ansible ad hoc command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The ``mkpasswd`` utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (for example, you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
In OpenBSD, a similar option is available in the base system, called ``encrypt(1)``.
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Array notation also allows for dynamic variable composition, see dynamic_variables_.
Another issue with 'dot notation' is that some keys collide with attributes and methods of python dictionaries.
.. code-block:: jinja
item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries
item['update'] # this works
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this::
#...
vars:
usermod_args:
name: testuser
state: present
update_password: always
tasks:
- user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence
found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take
precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding
with variables (this will also disable the original warning)
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services
and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our
`webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / GUI?
++++++++++++++++++++++++++++++++++++++++++++
Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`.
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful::
- name: secret task
shell: /usr/bin/do_something --value={{ secret_value }}
no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The ``no_log`` attribute can also apply to an entire play::
- hosts: all
no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this
be applied to single tasks only, once a playbook is completed. Note that the use of the
``no_log`` attribute does not prevent data from being shown when debugging Ansible itself via
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 so as to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even if previously you could use bare variables
(in ``loop`` or ``with_`` clauses, for example), as this made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you expect, if you need to use a dynamic variable use the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' ~ other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' ~ other_var) }}
To determine if a keyword requires ``{{ }}`` or even supports templating, use ``ansible-doc -t keyword <name>``.
This returns documentation for the keyword, including a ``template`` field with the values ``explicit`` (requires ``{{ }}``),
``implicit`` (assumes ``{{ }}``, so none are needed) or ``static`` (no templating supported, all characters will be interpreted literally).
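For example:

.. code-block:: shell-session

    ansible-doc -t keyword environment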
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
but you can still access the original via ``hostvars``::
original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on.
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism::
failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request
In these releases, SCP tries to validate that the path of the file to fetch matches the requested path.
The validation
fails if the remote filename requires quotes to escape spaces or non-ASCII characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
* Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
.. _mfa_support:
Does Ansible support multi-factor authentication (2FA/MFA/biometrics/fingerprint/USB key/OTP/...)?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction.
As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction.
Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also
tend to have very short expiration periods, so they require frequent reauthorization, which is also an issue with many hosts and/or
a long set of tasks.
In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures.
With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution.
.. _complex_configuration_validation:
The 'validate' option is not enough for my needs, what do I do?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails.
This uses the temporary file Ansible creates before doing the final update. In many cases this does not work since the validation tools
for the specific application require either specific names, multiple files or some other factor that is not present in this simple feature.
For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue
and backups, which most file based modules also support:
.. code-block:: yaml
- name: update config and backout if validation fails
block:
- name: do the actual update, works with copy, lineinfile and any action that allows for `backup`.
template: src=template.j2 dest=/x/y/z backup=true moreoptions=stuff
register: updated
- name: run validation; this will change a lot as needed. We assume it returns an error when validation fails; use `failed_when` otherwise.
shell: run_validation_command
become: true
become_user: requiredbyapp
environment:
WEIRD_REQUIREMENT: 1
rescue:
- name: restore backup file to original, in the hope the previous configuration was working.
copy:
remote_src: true
dest: /x/y/z
src: "{{ updated['backup_file'] }}"
always:
- name: We choose to always delete the backup, but you could copy or move it, or only delete it in rescue.
file:
path: "{{ updated['backup_file'] }}"
state: absent
.. _jinja2_faqs:
Why does the ``regex_search`` filter return `None` instead of an empty string?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Until the jinja2 2.10 release, Jinja was only able to return strings, but Ansible needed Python objects in some cases. Ansible uses ``safe_eval`` and only sends strings that look like certain types of Python objects through this function. With ``regex_search`` that does not find a match, the result (``None``) is converted to the string "None" which is not useful in non-native jinja2.
The following example of a single templating action shows this behavior:
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') }}
This example does not result in a Python ``None``, so Ansible historically converted it to "" (empty string).
The native jinja2 functionality actually allows us to return full Python objects, that are always represented as Python objects everywhere, and as such the result of a single templating action with ``regex_search`` can result in the Python ``None``.
.. note::
Native jinja2 functionality is not needed when ``regex_search`` is used as an intermediate result that is then compared to the jinja2 ``none`` test.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') is none }}
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Documentation for Ansible is kept in the main project git repository, and complete instructions
for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _legacy_vs_builtin:
What is the difference between ``ansible.legacy`` and ``ansible.builtin`` collections?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neither is a real collection. They are virtually constructed by the core engine (synthetic collections).
The ``ansible.builtin`` collection only refers to plugins that ship with ``ansible-core``.
The ``ansible.legacy`` collection is a superset of ``ansible.builtin`` (you can reference the plugins from builtin through ``ansible.legacy``). You also get the ability to
add 'custom' plugins in the :ref:`configured paths and adjacent directories <ansible_search_path>`, with the ability to override the builtin plugins that have the same name.
Also, ``ansible.legacy`` is what you get by default when you do not specify an FQCN.
So this:
.. code-block:: yaml
- shell: echo hi
Is really equivalent to:
.. code-block:: yaml
- ansible.legacy.shell: echo hi
Though, if you do not override the ``shell`` module, you can also just write it as ``ansible.builtin.shell``, since legacy will resolve to the builtin collection.
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,924 |
Docs: reference_appendices: replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/reference_appendices/` files.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 7 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/reference_appendices/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/faq.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
no
```
### OS / Environment
no
### Additional Information
no
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78924
|
https://github.com/ansible/ansible/pull/78950
|
fbd98cd8246e8554269b2c766db2b2228cc30bd9
|
78c9fb415954ca630f028fe7a7d154658fc41422
| 2022-09-29T14:27:08Z |
python
| 2022-10-03T20:49:43Z |
docs/docsite/rst/reference_appendices/general_precedence.rst
|
.. _general_precedence_rules:
Controlling how Ansible behaves: precedence rules
=================================================
To give you maximum flexibility in managing your environments, Ansible offers many ways to control how Ansible behaves: how it connects to managed nodes and how it works once it has connected.
If you use Ansible to manage a large number of servers, network devices, and cloud resources, you may define Ansible behavior in several different places and pass that information to Ansible in several different ways.
This flexibility is convenient, but it can backfire if you do not understand the precedence rules.
These precedence rules apply to any setting that can be defined in multiple ways (by configuration settings, command-line options, playbook keywords, variables).
.. contents::
:local:
Precedence categories
---------------------
Ansible offers four sources for controlling its behavior. In order of precedence from lowest (most easily overridden) to highest (overrides all others), the categories are:
* Configuration settings
* Command-line options
* Playbook keywords
* Variables
Each category overrides any information from all lower-precedence categories. For example, a playbook keyword will override any configuration setting.
Within each precedence category, specific rules apply. However, generally speaking, 'last defined' wins and overrides any previous definitions.
Configuration settings
^^^^^^^^^^^^^^^^^^^^^^
:ref:`Configuration settings<ansible_configuration_settings>` include both values from the ``ansible.cfg`` file and environment variables. Within this category, values set in configuration files have lower precedence. Ansible uses the first ``ansible.cfg`` file it finds, ignoring all others. Ansible searches for ``ansible.cfg`` in these locations in order:
* ``ANSIBLE_CONFIG`` (environment variable if set)
* ``ansible.cfg`` (in the current directory)
* ``~/.ansible.cfg`` (in the home directory)
* ``/etc/ansible/ansible.cfg``
Environment variables have a higher precedence than entries in ``ansible.cfg``. If you have environment variables set on your control node, they override the settings in whichever ``ansible.cfg`` file Ansible loads. The value of any given environment variable follows normal shell precedence: the last value defined overwrites previous values.
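For example, in this hypothetical setup, the environment variable wins for the duration of the command:

.. code-block:: shell

    # ansible.cfg contains:
    #   [defaults]
    #   forks = 5
    # ANSIBLE_FORKS overrides the file setting for this invocation only
    ANSIBLE_FORKS=20 ansible all -m ping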
Command-line options
^^^^^^^^^^^^^^^^^^^^
Any command-line option will override any configuration setting.
When you type something directly at the command line, you may feel that your hand-crafted values should override all others, but Ansible does not work that way. Command-line options have low precedence - they override configuration only. They do not override playbook keywords, variables from inventory or variables from playbooks.
You can override all other settings from all other sources in all other precedence categories at the command line by :ref:`general_precedence_extra_vars`, but that is not a command-line option, it is a way of passing a :ref:`variable<general_precedence_variables>`.
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``::
ansible -u mike -m ping myhost -u carol
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2::
ansible -i /path/inventory1 -i /path/inventory2 -m ping all
The help for each :ref:`command-line tool<command_line_tools>` lists available options for that tool.
Playbook keywords
^^^^^^^^^^^^^^^^^
Any :ref:`playbook keyword<playbook_keywords>` will override any command-line option and any configuration setting.
Within playbook keywords, precedence flows with the playbook itself; the more specific wins against the more general:
- play (most general)
- blocks/includes/imports/roles (optional and can contain tasks and each other)
- tasks (most specific)
A simple example::
- hosts: all
connection: ssh
tasks:
- name: This task uses ssh.
ping:
- name: This task uses paramiko.
connection: paramiko
ping:
In this example, the ``connection`` keyword is set to ``ssh`` at the play level. The first task inherits that value, and connects using ``ssh``. The second task inherits that value, overrides it, and connects using ``paramiko``.
The same logic applies to blocks and roles as well. All tasks, blocks, and roles within a play inherit play-level keywords; any task, block, or role can override any keyword by defining a different value for that keyword within the task, block, or role.
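For example, a block can override a play-level keyword for only the tasks it contains (a minimal sketch):

.. code-block:: yaml

    - hosts: all
      become: true
      tasks:
        - name: This task runs with become enabled, inherited from the play.
          ansible.builtin.ping:

        - block:
            - name: This task runs without become, overridden at the block level.
              ansible.builtin.ping:
          become: false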
Remember that these are KEYWORDS, not variables. Both playbooks and variable files are defined in YAML but they have different significance.
Playbooks are the command or 'state description' structure for Ansible; variables are data we use to help make playbooks more dynamic.
.. _general_precedence_variables:
Variables
^^^^^^^^^
Any variable will override any playbook keyword, any command-line option, and any configuration setting.
Variables that have equivalent playbook keywords, command-line options, and configuration settings are known as :ref:`connection_variables`. Originally designed for connection parameters, this category has expanded to include other core variables like the temporary directory and the python interpreter.
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in :ref:`inventory<intro_inventory>`. You can define variables for tasks and plays in ``vars:`` blocks in :ref:`playbooks<about_playbooks>`. However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of :ref:`variable precedence <ansible_variable_precedence>` as any other variables.
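For example, connection variables are often set per host or per group in an INI inventory (the hostnames below are hypothetical):

.. code-block:: ini

    [webservers]
    web1.example.com ansible_user=deploy ansible_port=2222

    [webservers:vars]
    ansible_connection=ssh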
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role::
- hosts: cloud
gather_facts: false
become: true
vars:
ansible_become_user: admin
tasks:
- name: This task uses admin as the become user.
dnf:
name: some-service
state: latest
- block:
- name: This task uses service-admin as the become user.
# a task to configure the new service
- name: This task also uses service-admin as the become user, defined in the block.
# second task to configure the service
vars:
ansible_become_user: service-admin
- name: This task (outside of the block) uses admin as the become user again.
service:
name: some-service
state: restarted
Variable scope: how long is a value available?
""""""""""""""""""""""""""""""""""""""""""""""
Variable values set in a playbook exist only within the playbook object that defines them. These 'playbook object scope' variables are not available to subsequent objects, including other plays.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available via the ``hostvars[]`` dictionary.
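A short sketch of host scope, using hypothetical host names: a fact set on one host in an early play remains readable in a later play through ``hostvars``:

.. code-block:: yaml

    - hosts: app1
      tasks:
        - ansible.builtin.set_fact:
            app_version: 1.2.3

    - hosts: web1
      tasks:
        - ansible.builtin.debug:
            msg: "app1 is running version {{ hostvars['app1']['app_version'] }}"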
.. _general_precedence_extra_vars:
Using ``-e`` extra variables at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian`` not as ``carol``::
ansible -u carol -e 'ansible_user=brian' -a whoami all
You must specify both the variable name and the value with ``--extra-vars``.
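``--extra-vars`` accepts several formats; for example (the file name is illustrative):

.. code-block:: shell

    ansible-playbook site.yml -e 'ansible_user=brian'         # key=value
    ansible-playbook site.yml -e '{"ansible_user": "brian"}'  # inline JSON
    ansible-playbook site.yml -e @vars.yml                    # variables loaded from a file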
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,924 |
Docs: reference_appendices: replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes in the `docs/docsite/rst/reference_appendices/` files.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle thse changes. It should impact 7 files. NOTE - ansibot does not like PRs over 50 files.
The following grep will help you find these occurrences from the `docs/docsite/rst/reference_appendices/` directory:
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/faq.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
no
```
### OS / Environment
no
### Additional Information
no
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78924
|
https://github.com/ansible/ansible/pull/78950
|
fbd98cd8246e8554269b2c766db2b2228cc30bd9
|
78c9fb415954ca630f028fe7a7d154658fc41422
| 2022-09-29T14:27:08Z |
python
| 2022-10-03T20:49:43Z |
docs/docsite/rst/reference_appendices/test_strategies.rst
|
.. _testing_strategies:
Testing Strategies
==================
.. _testing_intro:
Integrating Testing With Ansible Playbooks
``````````````````````````````````````````
Many times, people ask, "how can I best integrate testing with Ansible playbooks?" There are many options. Ansible is designed
to be a "fail-fast" and ordered system, so it is easy to embed testing directly in Ansible playbooks. In this chapter,
we'll go over some patterns for integrating tests of infrastructure and discuss the right level of testing that may be appropriate.
.. note:: This is a chapter about testing the application you are deploying, not the chapter on how to test Ansible modules during development. For that content, please hop over to the Development section.
By incorporating a degree of testing into your deployment workflow, there will be fewer surprises when code hits production and, in many cases,
tests can be used in production to prevent failed updates from migrating across an entire installation. Since it's push-based, it's
also very easy to run the steps on the localhost or testing servers. Ansible lets you insert as many checks and balances into your upgrade workflow as you would like to have.
The Right Level of Testing
``````````````````````````
Ansible resources are models of desired-state. As such, it should not be necessary to test that services are started, packages are
installed, or other such things. Ansible is the system that will ensure these things are declaratively true. Instead, assert these
things in your playbooks.
.. code-block:: yaml
tasks:
- ansible.builtin.service:
name: foo
state: started
enabled: true
If you think the service may not be started, the best thing to do is request it to be started. If the service fails to start, Ansible
will yell appropriately. (This should not be confused with whether the service is doing something functional, which we'll show more about how to
do later).
.. _check_mode_drift:
Check Mode As A Drift Test
``````````````````````````
In the above setup, ``--check`` mode in Ansible can be used as a layer of testing as well. If running a deployment playbook against an
existing system, using the ``--check`` flag to the `ansible` command will report whether Ansible thinks it would have made any changes to
bring the system into the desired state.
This can let you know up front if there is any need to deploy onto the given system. Ordinarily, scripts and commands don't run in check mode, so if you
want certain steps to execute in normal mode even when the ``--check`` flag is used, such as calls to the script module, disable check mode for those tasks::
roles:
- webserver
tasks:
- ansible.builtin.script: verify.sh
check_mode: false
Modules That Are Useful for Testing
```````````````````````````````````
Certain playbook modules are particularly good for testing. Below is an example that ensures a port is open::
tasks:
- ansible.builtin.wait_for:
host: "{{ inventory_hostname }}"
port: 22
delegate_to: localhost
Here's an example of using the URI module to make sure a web service responds::
tasks:
- ansible.builtin.uri:
    url: https://www.example.com
    return_content: true
  register: webpage
- ansible.builtin.fail:
msg: 'service is not happy'
when: "'AWESOME' not in webpage.content"
It's easy to push an arbitrary script (in any language) on a remote host and the script will automatically fail if it has a non-zero return code::
tasks:
- ansible.builtin.script: test_script1
- ansible.builtin.script: test_script2 --parameter value --parameter2 value
If using roles (you should be, roles are great!), scripts pushed by the script module can live in the 'files/' directory of a role.
And the assert module makes it very easy to validate various kinds of truth::
tasks:
- ansible.builtin.shell: /usr/bin/some-command --parameter value
register: cmd_result
- ansible.builtin.assert:
that:
- "'not ready' not in cmd_result.stderr"
- "'gizmo enabled' in cmd_result.stdout"
Should you feel the need to test for the existence of files that are not declaratively set by your Ansible configuration, the 'stat' module is a great choice::
tasks:
- ansible.builtin.stat:
path: /path/to/something
register: p
- ansible.builtin.assert:
that:
- p.stat.exists and p.stat.isdir
As mentioned above, there's no need to check things like the return codes of commands. Ansible is checking them automatically.
Rather than checking for a user to exist, consider using the user module to make it exist.
Ansible is a fail-fast system, so when there is an error creating that user, it will stop the playbook run. You do not have
to check up behind it.
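For example, rather than asserting that a user exists, declare it; a minimal sketch (the account name is made up)::

    tasks:

      - ansible.builtin.user:
          name: deploy
          state: present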
Testing Lifecycle
`````````````````
If writing some degree of basic validation of your application into your playbooks, they will run every time you deploy.
As such, deploying into a local development VM and a staging environment will both validate that things are according to plan
ahead of your production deploy.
Your workflow may be something like this::
- Use the same playbook all the time with embedded tests in development
- Use the playbook to deploy to a staging environment (with the same playbooks) that simulates production
- Run an integration test battery written by your QA team against staging
- Deploy to production, with the same integrated tests.
Something like an integration test battery should be written by your QA team if you run a production web service. This would include
things like Selenium tests or automated API tests and would usually not be something embedded into your Ansible playbooks.
However, it does make sense to include some basic health checks into your playbooks, and in some cases it may be possible to run
a subset of the QA battery against remote nodes. This is what the next section covers.
Integrating Testing With Rolling Updates
````````````````````````````````````````
If you have read into :ref:`playbooks_delegation` it may quickly become apparent that the rolling update pattern can be extended, and you
can use the success or failure of the playbook run to decide whether to add a machine into a load balancer or not.
This is the great culmination of embedded tests::
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
- apply_testing_checks
post_tasks:
- name: add back to load balancer pool
ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
Of course in the above, the "take out of the pool" and "add back" steps would be replaced with a call to an Ansible load balancer
module or appropriate shell command. You might also have steps that use a monitoring module to start and end an outage window
for the machine.
However, what you can see from the above is that tests are used as a gate -- if the "apply_testing_checks" step is not performed,
the machine will not go back into the pool.
Read the delegation chapter about ``max_fail_percentage`` to learn how to control how many failing tests will stop a rolling update
from proceeding.
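As a minimal sketch, a play can combine ``serial`` with ``max_fail_percentage`` so that the rolling update aborts once too many hosts in a batch fail their checks::

    ---
    - hosts: webservers
      serial: 10
      max_fail_percentage: 30

      roles:
        - common
        - webserver
        - apply_testing_checks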
This above approach can also be modified to run a step from a testing machine remotely against a machine::
---
- hosts: webservers
serial: 5
pre_tasks:
- name: take out of load balancer pool
ansible.builtin.command: /usr/bin/take_out_of_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
roles:
- common
- webserver
tasks:
- ansible.builtin.script: /srv/qa_team/app_testing_script.sh --server {{ inventory_hostname }}
delegate_to: testing_server
post_tasks:
- name: add back to load balancer pool
ansible.builtin.command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
delegate_to: 127.0.0.1
In the above example, a script is run from the testing server against a remote node prior to bringing it back into
the pool.
In the event of a problem, fix the few servers that fail using Ansible's automatically generated
retry file to repeat the deploy on just those servers.
Achieving Continuous Deployment
```````````````````````````````
If desired, the above techniques may be extended to enable continuous deployment practices.
The workflow may look like this::
- Write and use automation to deploy local development VMs
- Have a CI system like Jenkins deploy to a staging environment on every code change
- The deploy job calls testing scripts to pass/fail a build on every deploy
- If the deploy job succeeds, it runs the same deploy playbook against production inventory
Some Ansible users use the above approach to deploy a half-dozen or a dozen times an hour without taking all of their infrastructure
offline. A culture of automated QA is vital if you wish to get to this level.
If you are still doing a large amount of manual QA, you may decide to deploy manually as well, but it can still help to work in
the rolling update patterns of the previous section and incorporate some basic health checks using
modules like 'script', 'stat', 'uri', and 'assert'.
Conclusion
``````````
Ansible believes you should not need another framework to validate that basic things about your infrastructure are true. This is the case
because Ansible is an order-based system that will fail immediately on unhandled errors for a host, and prevent further configuration
of that host. This forces errors to the top and shows them in a summary at the end of the Ansible run.
However, as Ansible is designed as a multi-tier orchestration system, it makes it very easy to incorporate tests into the end of
a playbook run, either using loose tasks or roles. When used with rolling updates, testing steps can decide whether to put a machine
back into a load balanced pool or not.
Finally, because Ansible errors propagate all the way up to the return code of the Ansible program itself, and Ansible by default
runs in an easy push-based mode, Ansible is a great step to put into a build environment if you wish to use it to roll out systems
as part of a Continuous Integration/Continuous Delivery pipeline, as is covered in sections above.
The focus should not be on infrastructure testing, but on application testing, so we strongly encourage getting together with your
QA team and asking what sort of tests would make sense to run every time you deploy development VMs, and which sort of tests they would like
to run against the staging environment on every deploy. Obviously at the development stage, unit tests are great too. But don't unit
test your playbook. Ansible describes states of resources declaratively, so you don't have to. If there are cases where you want
to be sure of something though, that's great, and things like stat/assert are great go-to modules for that purpose.
In all, testing is a very organizational and site-specific thing. Everybody should be doing it, but what makes the most sense for your
environment will vary with what you are deploying and who is using it -- but everyone benefits from a more robust and reliable deployment
system.
.. seealso::
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_delegation`
Delegation, useful for working with load balancers, clouds, and locally executed steps.
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,003 |
Docs: Replace latin terms with english in the os_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/os_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/os_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/os_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79003
|
https://github.com/ansible/ansible/pull/79007
|
78c9fb415954ca630f028fe7a7d154658fc41422
|
55925958ea8ea48273c7ade660ceea0e9e24f348
| 2022-10-03T20:11:55Z |
python
| 2022-10-03T20:51:44Z |
docs/docsite/rst/os_guide/windows_faq.rst
|
.. _windows_faq:
Windows Frequently Asked Questions
==================================
Here are some commonly asked questions about managing Windows with Ansible, along with
their answers.
.. note:: This document covers questions about managing Microsoft Windows servers with Ansible.
For questions about Ansible Core, please see the
:ref:`general FAQ page <ansible_faq>`.
Does Ansible work with Windows XP or Server 2003?
``````````````````````````````````````````````````
Ansible does not work with Windows XP or Server 2003 hosts. Ansible does work with these Windows operating system versions:
* Windows Server 2008 :sup:`1`
* Windows Server 2008 R2 :sup:`1`
* Windows Server 2012
* Windows Server 2012 R2
* Windows Server 2016
* Windows Server 2019
* Windows 7 :sup:`1`
* Windows 8.1
* Windows 10
1 - See the :ref:`Server 2008 FAQ <windows_faq_server2008>` entry for more details.
Ansible also has minimum PowerShell version requirements - please see
:ref:`windows_setup` for the latest information.
.. _windows_faq_server2008:
Are Server 2008, 2008 R2 and Windows 7 supported?
`````````````````````````````````````````````````
Microsoft ended Extended Support for these versions of Windows on January 14th, 2020, and Ansible deprecated official support in the 2.10 release. No new feature development will occur targeting these operating systems, and automated testing has ceased. However, existing modules and features will likely continue to work, and simple pull requests to resolve issues with these Windows versions may be accepted.
Can I manage Windows Nano Server with Ansible?
``````````````````````````````````````````````
Ansible does not currently work with Windows Nano Server, since it does
not have access to the full .NET Framework that is used by the majority of the
modules and internal components.
.. _windows_faq_ansible:
Can Ansible run on Windows?
```````````````````````````
No, Ansible can only manage Windows hosts. Ansible cannot run on a Windows host
natively, though it can run under the Windows Subsystem for Linux (WSL).
.. note:: The Windows Subsystem for Linux is not supported by Ansible and
should not be used for production systems.
To install Ansible on WSL, the following commands
can be run in the bash terminal:
.. code-block:: shell
sudo apt-get update
sudo apt-get install python-pip git libffi-dev libssl-dev -y
pip install --user ansible pywinrm
To run Ansible from source instead of a release on the WSL, uninstall the pip-installed
version and then clone the git repo.
.. code-block:: shell
pip uninstall ansible -y
git clone https://github.com/ansible/ansible.git
source ansible/hacking/env-setup
# To enable Ansible on login, run the following
echo ". ~/ansible/hacking/env-setup -q" >> ~/.bashrc
If you encounter timeout errors when running Ansible on the WSL, this may be due to an issue
with ``sleep`` not returning correctly. The following workaround may resolve the issue:
.. code-block:: shell
mv /usr/bin/sleep /usr/bin/sleep.orig
ln -s /bin/true /usr/bin/sleep
Another option is to use WSL 2 if you are running Windows 10 build 2004 or later.
.. code-block:: shell
wsl --set-default-version 2
Can I use SSH keys to authenticate to Windows hosts?
````````````````````````````````````````````````````
You cannot use SSH keys with the WinRM or PSRP connection plugins.
These connection plugins use X509 certificates for authentication instead
of the SSH key pairs that SSH uses.
The way X509 certificates are generated and mapped to a user is different
from the SSH implementation; consult the :ref:`windows_winrm` documentation for
more information.
Ansible 2.8 has added an experimental option to use the SSH connection plugin,
which uses SSH keys for authentication, for Windows servers. See :ref:`this question <windows_faq_ssh>`
for more information.
.. _windows_faq_winrm:
Why can I run a command locally that does not work under Ansible?
`````````````````````````````````````````````````````````````````
Ansible executes commands through WinRM. These processes are different from
running a command locally in these ways:
* Unless using an authentication option like CredSSP or Kerberos with
credential delegation, the WinRM process does not have the ability to
delegate the user's credentials to a network resource, causing ``Access is
Denied`` errors.
* All processes run under WinRM are in a non-interactive session. Applications
that require an interactive session will not work.
* When running through WinRM, Windows restricts access to internal Windows
APIs like the Windows Update API and DPAPI, which some installers and
programs rely on.
Some ways to bypass these restrictions are to:
* Use ``become``, which runs a command as it would when run locally. This will
bypass most WinRM restrictions, as Windows is unaware the process is running
under WinRM when ``become`` is used. See the :ref:`become` documentation for more
information.
* Use a scheduled task, which can be created with ``win_scheduled_task``. Like
``become``, it will bypass all WinRM restrictions, but it can only be used to run
commands, not modules.
* Use ``win_psexec`` to run a command on the host. PSExec does not use WinRM
and so will bypass any of the restrictions.
* To access network resources without any of these workarounds, you can use
CredSSP or Kerberos with credential delegation enabled.
See :ref:`become` more info on how to use become. The limitations section at
:ref:`windows_winrm` has more details around WinRM limitations.
This program won't install on Windows with Ansible
``````````````````````````````````````````````````
See :ref:`this question <windows_faq_winrm>` for more information about WinRM limitations.
What Windows modules are available?
```````````````````````````````````
Most of the Ansible modules in Ansible Core are written for a combination of
Linux/Unix machines and arbitrary web services. These modules are written in
Python and most of them do not work on Windows.
Because of this, there are dedicated Windows modules that are written in
PowerShell and are meant to be run on Windows hosts. A list of these modules
can be found :ref:`here <windows_modules>`.
In addition, the following Ansible Core modules/action-plugins work with Windows:
* add_host
* assert
* async_status
* debug
* fail
* fetch
* group_by
* include
* include_role
* include_vars
* meta
* pause
* raw
* script
* set_fact
* set_stats
* setup
* slurp
* template (also: win_template)
* wait_for_connection
Ansible Windows modules exist in the :ref:`plugins_in_ansible.windows`, :ref:`plugins_in_community.windows`, and :ref:`plugins_in_chocolatey.chocolatey` collections.
Can I run Python modules on Windows hosts?
``````````````````````````````````````````
No, the WinRM connection protocol is set to use PowerShell modules, so Python
modules will not work. A way to bypass this issue is to use
``delegate_to: localhost`` to run a Python module on the Ansible controller.
This is useful if during a playbook, an external service needs to be contacted
and there is no equivalent Windows module available.
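For example (the URL is illustrative), the Python-based ``uri`` module can run on the controller while the play targets a Windows host:

.. code-block:: yaml

    - name: Contact an external service from the controller, not the Windows host
      ansible.builtin.uri:
        url: https://api.example.com/status
      delegate_to: localhost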
.. _windows_faq_ssh:
Can I connect to Windows hosts over SSH?
````````````````````````````````````````
Ansible 2.8 has added an experimental option to use the SSH connection plugin
to manage Windows hosts. To connect to Windows hosts over SSH, you must install and configure the `Win32-OpenSSH <https://github.com/PowerShell/Win32-OpenSSH>`_
fork that is in development with Microsoft on
the Windows host(s). While most of the basics should work with SSH,
``Win32-OpenSSH`` is rapidly changing, with new features added and bugs
fixed in every release. It is highly recommended that you `install <https://github.com/PowerShell/Win32-OpenSSH/wiki/Install-Win32-OpenSSH>`_ the latest release
of ``Win32-OpenSSH`` from the GitHub Releases page when using it with Ansible
on Windows hosts.
To use SSH as the connection to a Windows host, set the following variables in
the inventory:
.. code-block:: shell
ansible_connection=ssh
# Set either cmd or powershell not both
ansible_shell_type=cmd
# ansible_shell_type=powershell
The value for ``ansible_shell_type`` should either be ``cmd`` or ``powershell``.
Use ``cmd`` if the ``DefaultShell`` has not been configured on the SSH service
and ``powershell`` if that has been set as the ``DefaultShell``.
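At the time of writing, the ``Win32-OpenSSH`` wiki documents configuring the ``DefaultShell`` through the registry; a sketch (verify the paths against the current wiki):

.. code-block:: powershell

    # Make PowerShell the default shell for incoming SSH sessions
    New-ItemProperty -Path "HKLM:\SOFTWARE\OpenSSH" -Name DefaultShell `
        -Value "C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe" `
        -PropertyType String -Force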
Why is connecting to a Windows host over SSH failing?
``````````````````````````````````````````````````````
Unless you are using ``Win32-OpenSSH`` as described above, you must connect to
Windows hosts using :ref:`windows_winrm`. If your Ansible output indicates that
SSH was used, either you did not set the connection vars properly or the host is not inheriting them correctly.
Make sure ``ansible_connection: winrm`` is set in the inventory for the Windows
host(s).
Why are my credentials being rejected?
``````````````````````````````````````
This can be due to a myriad of reasons unrelated to incorrect credentials.
See HTTP 401/Credentials Rejected at :ref:`windows_setup` for a more detailed
guide of what this could mean.
Why am I getting an error SSL CERTIFICATE_VERIFY_FAILED?
````````````````````````````````````````````````````````
When the Ansible controller is running on Python 2.7.9+ or an older version of Python that
has backported SSLContext (like Python 2.7.5 on RHEL 7), the controller will attempt to
validate the certificate WinRM is using for an HTTPS connection. If the
certificate cannot be validated (such as in the case of a self-signed certificate), it will
fail the verification process.
To ignore certificate validation, add
``ansible_winrm_server_cert_validation: ignore`` to inventory for the Windows
host.
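For example, in an INI inventory (the hostname is hypothetical):

.. code-block:: ini

    [windows]
    win-host.example.com

    [windows:vars]
    ansible_connection=winrm
    ansible_winrm_server_cert_validation=ignore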
.. seealso::
:ref:`windows`
The Windows documentation index
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,003 |
Docs: Replace latin terms with english in the os_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/os_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/os_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/os_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79003
|
https://github.com/ansible/ansible/pull/79007
|
78c9fb415954ca630f028fe7a7d154658fc41422
|
55925958ea8ea48273c7ade660ceea0e9e24f348
| 2022-10-03T20:11:55Z |
python
| 2022-10-03T20:51:44Z |
docs/docsite/rst/os_guide/windows_performance.rst
|
.. _windows_performance:
Windows performance
===================
This document offers some performance optimizations you might like to apply to
your Windows hosts to speed them up specifically in the context of using Ansible
with them, and generally.
Optimize PowerShell performance to reduce Ansible task overhead
---------------------------------------------------------------
To speed up the startup of PowerShell by around 10x, run the following
PowerShell snippet in an Administrator session. Expect it to take tens of
seconds.
.. note::
If native images have already been created by the ngen task or service, you
will observe no difference in performance (but this snippet will at that
point execute faster than otherwise).
.. code-block:: powershell
function Optimize-PowershellAssemblies {
# NGEN powershell assembly, improves startup time of powershell by 10x
$old_path = $env:path
try {
$env:path = [Runtime.InteropServices.RuntimeEnvironment]::GetRuntimeDirectory()
[AppDomain]::CurrentDomain.GetAssemblies() | % {
if (! $_.location) {return}  # skip assemblies without a location ('continue' would stop the whole pipeline)
$Name = Split-Path $_.location -leaf
if ($Name.startswith("Microsoft.PowerShell.")) {
Write-Progress -Activity "Native Image Installation" -Status "$name"
ngen install $_.location | % {"`t$_"}
}
}
} finally {
$env:path = $old_path
}
}
Optimize-PowershellAssemblies
PowerShell is used by every Windows Ansible module. This optimization reduces
the time PowerShell takes to start up, removing that overhead from every invocation.
This snippet uses `the native image generator, ngen <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#WhenToUse>`_
to pre-emptively create native images for the assemblies that PowerShell relies on.
Fix high-CPU-on-boot for VMs/cloud instances
--------------------------------------------
If you are creating golden images to spawn instances from, you can avoid a disruptive
high CPU task near startup by `processing the ngen queue <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-image-service>`_
within your golden image creation, if you know the CPU types won't change between
golden image build process and runtime.
Place the following near the end of your playbook, bearing in mind the factors that can cause native images to be invalidated (`see MSDN <https://docs.microsoft.com/en-us/dotnet/framework/tools/ngen-exe-native-image-generator#native-images-and-jit-compilation>`_).
.. code-block:: yaml
- name: generate native .NET images for CPU
win_dotnet_ngen:
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,003 |
Docs: Replace latin terms with english in the os_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/os_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/os_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/os_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79003
|
https://github.com/ansible/ansible/pull/79007
|
78c9fb415954ca630f028fe7a7d154658fc41422
|
55925958ea8ea48273c7ade660ceea0e9e24f348
| 2022-10-03T20:11:55Z |
python
| 2022-10-03T20:51:44Z |
docs/docsite/rst/os_guide/windows_usage.rst
|
.. _windows_usage:
Using Ansible and Windows
=========================
When using Ansible to manage Windows, many of the syntax and rules that apply
for Unix/Linux hosts also apply to Windows, but there are still some differences
when it comes to components like path separators and OS-specific tasks.
This document covers details specific to using Ansible for Windows.
.. contents:: Topics
:local:
Use Cases
`````````
Ansible can be used to orchestrate a multitude of tasks on Windows servers.
Below are some examples and info about common tasks.
Installing Software
-------------------
There are three main ways that Ansible can be used to install software:
* Using the ``win_chocolatey`` module. This sources the program data from the default
public `Chocolatey <https://chocolatey.org/>`_ repository. Internal repositories can
be used instead by setting the ``source`` option.
* Using the ``win_package`` module. This installs software using an MSI or .exe installer
from a local/network path or URL.
* Using the ``win_command`` or ``win_shell`` module to run an installer manually.
The ``win_chocolatey`` module is recommended since it has the most complete logic for checking to see if a package has already been installed and is up-to-date.
Below are some examples of using all three options to install 7-Zip:
.. code-block:: yaml+jinja
# Install/uninstall with chocolatey
- name: Ensure 7-Zip is installed with Chocolatey
win_chocolatey:
name: 7zip
state: present
- name: Ensure 7-Zip is not installed with Chocolatey
win_chocolatey:
name: 7zip
state: absent
# Install/uninstall with win_package
- name: Download the 7-Zip package
win_get_url:
url: https://www.7-zip.org/a/7z1701-x64.msi
dest: C:\temp\7z.msi
- name: Ensure 7-Zip is installed with win_package
win_package:
path: C:\temp\7z.msi
state: present
- name: Ensure 7-Zip is not installed with win_package
win_package:
path: C:\temp\7z.msi
state: absent
# Install/uninstall with win_command
- name: Download the 7-Zip package
win_get_url:
url: https://www.7-zip.org/a/7z1701-x64.msi
dest: C:\temp\7z.msi
- name: Check if 7-Zip is already installed
win_reg_stat:
name: HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{23170F69-40C1-2702-1701-000001000000}
register: sevenzip_installed
- name: Ensure 7-Zip is installed with win_command
win_command: C:\Windows\System32\msiexec.exe /i C:\temp\7z.msi /qn /norestart
when: not sevenzip_installed.exists
- name: Ensure 7-Zip is uninstalled with win_command
win_command: C:\Windows\System32\msiexec.exe /x {23170F69-40C1-2702-1701-000001000000} /qn /norestart
when: sevenzip_installed.exists
Some installers like Microsoft Office or SQL Server require credential delegation or
access to components restricted by WinRM. The best method to bypass these
issues is to use ``become`` with the task. With ``become``, Ansible will run
the installer as if it were run interactively on the host.
.. Note:: Many installers do not properly pass back error information over WinRM. In these cases, if the install has been verified to work locally the recommended method is to use become.
.. Note:: Some installers restart the WinRM or HTTP services, or cause them to become temporarily unavailable, making Ansible assume the system is unreachable.
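One common mitigation, sketched here with made-up package details, is to start the installer asynchronously and then wait for the connection to return:

.. code-block:: yaml+jinja

    - name: Start an installer that may restart WinRM, without waiting on it
      win_package:
        path: C:\temp\installer.msi
        state: present
      async: 600
      poll: 0

    - name: Wait for WinRM to become reachable again
      wait_for_connection:
        delay: 30
        timeout: 600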
Installing Updates
------------------
The ``win_updates`` and ``win_hotfix`` modules can be used to install updates
or hotfixes on a host. The module ``win_updates`` is used to install multiple
updates by category, while ``win_hotfix`` can be used to install a single
update or hotfix file that has been downloaded locally.
.. Note:: The ``win_hotfix`` module has a requirement that the DISM PowerShell cmdlets are
present. These cmdlets were only added by default on Windows Server 2012
and newer and must be installed on older Windows hosts.
The following example shows how ``win_updates`` can be used:
.. code-block:: yaml+jinja
- name: Install all critical and security updates
win_updates:
category_names:
- CriticalUpdates
- SecurityUpdates
state: installed
register: update_result
- name: Reboot host if required
win_reboot:
when: update_result.reboot_required
The following example show how ``win_hotfix`` can be used to install a single
update or hotfix:
.. code-block:: yaml+jinja
- name: Download KB3172729 for Server 2012 R2
win_get_url:
url: http://download.windowsupdate.com/d/msdownload/update/software/secu/2016/07/windows8.1-kb3172729-x64_e8003822a7ef4705cbb65623b72fd3cec73fe222.msu
dest: C:\temp\KB3172729.msu
- name: Install hotfix
win_hotfix:
hotfix_kb: KB3172729
source: C:\temp\KB3172729.msu
state: present
register: hotfix_result
- name: Reboot host if required
win_reboot:
when: hotfix_result.reboot_required
Set Up Users and Groups
-----------------------
Ansible can be used to create Windows users and groups both locally and on a domain.
Local
+++++
The modules ``win_user``, ``win_group`` and ``win_group_membership`` manage
Windows users, groups and group memberships locally.
The following is an example of creating local accounts and groups that can
access a folder on the same host:
.. code-block:: yaml+jinja
- name: Create local group to contain new users
win_group:
name: LocalGroup
description: Allow access to C:\Development folder
- name: Create local user
win_user:
name: '{{ item.name }}'
password: '{{ item.password }}'
groups: LocalGroup
update_password: false
password_never_expires: true
loop:
- name: User1
password: Password1
- name: User2
password: Password2
- name: Create Development folder
win_file:
path: C:\Development
state: directory
- name: Set ACL of Development folder
win_acl:
path: C:\Development
rights: FullControl
state: present
type: allow
user: LocalGroup
- name: Remove parent inheritance of Development folder
win_acl_inheritance:
path: C:\Development
reorganize: true
state: absent
Domain
++++++
The modules ``win_domain_user`` and ``win_domain_group`` manage users and
groups in a domain. Below is an example of ensuring a batch of domain users
are created:
.. code-block:: yaml+jinja
- name: Ensure each account is created
win_domain_user:
name: '{{ item.name }}'
upn: '{{ item.name }}@MY.DOMAIN.COM'
password: '{{ item.password }}'
password_never_expires: false
groups:
- Test User
- Application
company: Ansible
update_password: on_create
loop:
- name: Test User
password: Password
- name: Admin User
password: SuperSecretPass01
- name: Dev User
password: '@fvr3IbFBujSRh!3hBg%wgFucD8^x8W5'
Running Commands
----------------
In cases where there is no appropriate module available for a task,
a command or script can be run using the ``win_shell``, ``win_command``, ``raw``, and ``script`` modules.
The ``raw`` module simply executes a PowerShell command remotely. Since ``raw``
has none of the wrappers that Ansible typically uses, ``become``, ``async``
and environment variables do not work.
The ``script`` module executes a script from the Ansible controller on
one or more Windows hosts. Like ``raw``, ``script`` currently does not support
``become``, ``async``, or environment variables.
The ``win_command`` module is used to execute a command which is either an
executable or batch file, while the ``win_shell`` module is used to execute commands within a shell.
Choosing Command or Shell
+++++++++++++++++++++++++
The ``win_shell`` and ``win_command`` modules can both be used to execute a command or commands.
The ``win_shell`` module is run within a shell-like process like ``PowerShell`` or ``cmd``, so it has access to shell
operators like ``<``, ``>``, ``|``, ``;``, ``&&``, and ``||``. Multi-lined commands can also be run in ``win_shell``.
The ``win_command`` module simply runs a process outside of a shell. It can still
run a shell command like ``mkdir`` or ``New-Item`` by passing the shell commands
to a shell executable like ``cmd.exe`` or ``PowerShell.exe``.
Here are some examples of using ``win_command`` and ``win_shell``:
.. code-block:: yaml+jinja
- name: Run a command under PowerShell
win_shell: Get-Service -Name service | Stop-Service
- name: Run a command under cmd
win_shell: mkdir C:\temp
args:
executable: cmd.exe
- name: Run multiple shell commands
win_shell: |
New-Item -Path C:\temp -ItemType Directory
Remove-Item -Path C:\temp -Force -Recurse
$path_info = Get-Item -Path C:\temp
$path_info.FullName
- name: Run an executable using win_command
win_command: whoami.exe
- name: Run a cmd command
win_command: cmd.exe /c mkdir C:\temp
- name: Run a vbs script
win_command: cscript.exe script.vbs
.. Note:: Some commands like ``mkdir``, ``del``, and ``copy`` only exist in
the CMD shell. To run them with ``win_command`` they must be
prefixed with ``cmd.exe /c``.
Argument Rules
++++++++++++++
When running a command through ``win_command``, the standard Windows argument
rules apply:
* Each argument is delimited by a white space, which can either be a space or a
tab.
* An argument can be surrounded by double quotes ``"``. Anything inside these
quotes is interpreted as a single argument even if it contains whitespace.
* A double quote preceded by a backslash ``\`` is interpreted as just a double
quote ``"`` and not as an argument delimiter.
* Backslashes are interpreted literally unless they immediately precede a double
quote; for example ``\`` == ``\`` and ``\"`` == ``"``
* If an even number of backslashes is followed by a double quote, one
backslash is used in the argument for every pair, and the double quote is
used as a string delimiter for the argument.
* If an odd number of backslashes is followed by a double quote, one backslash
is used in the argument for every pair, and the double quote is escaped and
made a literal double quote in the argument.
With those rules in mind, here are some examples of quoting:
.. code-block:: yaml+jinja
- win_command: C:\temp\executable.exe argument1 "argument 2" "C:\path\with space" "double \"quoted\""
argv[0] = C:\temp\executable.exe
argv[1] = argument1
argv[2] = argument 2
argv[3] = C:\path\with space
argv[4] = double "quoted"
- win_command: '"C:\Program Files\Program\program.exe" "escaped \\\" backslash" unquoted-end-backslash\'
argv[0] = C:\Program Files\Program\program.exe
argv[1] = escaped \" backslash
argv[2] = unquoted-end-backslash\
# Due to YAML and Ansible parsing '\"' must be written as '{% raw %}\\{% endraw %}"'
- win_command: C:\temp\executable.exe C:\no\space\path "arg with end \ before end quote{% raw %}\\{% endraw %}"
argv[0] = C:\temp\executable.exe
argv[1] = C:\no\space\path
argv[2] = arg with end \ before end quote\"
For more information, see `escaping arguments <https://msdn.microsoft.com/en-us/library/17w5ykft(v=vs.85).aspx>`_.
Creating and Running a Scheduled Task
-------------------------------------
WinRM has some restrictions in place that cause errors when running certain
commands. One way to bypass these restrictions is to run a command through a
scheduled task. A scheduled task is a Windows component that provides the
ability to run an executable on a schedule and under a different account.
Ansible version 2.5 added modules that make it easier to work with scheduled tasks in Windows.
The following is an example of running a script as a scheduled task that deletes itself after
running:
.. code-block:: yaml+jinja
- name: Create scheduled task to run a process
win_scheduled_task:
name: adhoc-task
username: SYSTEM
actions:
- path: PowerShell.exe
arguments: |
Start-Sleep -Seconds 30 # This isn't required, just here as a demonstration
New-Item -Path C:\temp\test -ItemType Directory
# Remove this action if the task shouldn't be deleted on completion
- path: cmd.exe
arguments: /c schtasks.exe /Delete /TN "adhoc-task" /F
triggers:
- type: registration
- name: Wait for the scheduled task to complete
win_scheduled_task_stat:
name: adhoc-task
register: task_stat
until: (task_stat.state is defined and task_stat.state.status != "TASK_STATE_RUNNING") or (not task_stat.task_exists)
retries: 12
delay: 10
.. Note:: The modules used in the above example were updated/added in Ansible
version 2.5.
Path Formatting for Windows
```````````````````````````
Windows differs from a traditional POSIX operating system in many ways. One of
the major changes is the shift from ``/`` as the path separator to ``\``. This
can cause major issues with how playbooks are written, since ``\`` is often used
as an escape character on POSIX systems.
Ansible allows two different styles of syntax; each deals with path separators for Windows differently:
YAML Style
----------
When using the YAML syntax for tasks, the rules are well-defined by the YAML
standard:
* When using a normal string (without quotes), YAML will not consider the
backslash an escape character.
* When using single quotes ``'``, YAML will not consider the backslash an
escape character.
* When using double quotes ``"``, the backslash is considered an escape
character and needs to be escaped with another backslash.
.. Note:: You should only quote strings when it is absolutely
necessary or required by YAML, and then use single quotes.
The YAML specification considers the following `escape sequences <https://yaml.org/spec/current.html#id2517668>`_:
* ``\0``, ``\\``, ``\"``, ``\_``, ``\a``, ``\b``, ``\e``, ``\f``, ``\n``, ``\r``, ``\t``,
``\v``, ``\L``, ``\N`` and ``\P`` -- Single character escape
* ``<TAB>``, ``<SPACE>``, ``<NBSP>``, ``<LNSP>``, ``<PSP>`` -- Special
characters
* ``\x..`` -- 2-digit hex escape
* ``\u....`` -- 4-digit hex escape
* ``\U........`` -- 8-digit hex escape
Here are some examples on how to write Windows paths:
.. code-block:: ini
# GOOD
tempdir: C:\Windows\Temp
# WORKS
tempdir: 'C:\Windows\Temp'
tempdir: "C:\\Windows\\Temp"
# BAD, BUT SOMETIMES WORKS
tempdir: C:\\Windows\\Temp
tempdir: 'C:\\Windows\\Temp'
tempdir: C:/Windows/Temp
This is an example which will fail:
.. code-block:: text
# FAILS
tempdir: "C:\Windows\Temp"
This example shows the use of single quotes when they are required:
.. code-block:: yaml+jinja
---
- name: Copy tomcat config
win_copy:
src: log4j.xml
dest: '{{tc_home}}\lib\log4j.xml'
Legacy key=value Style
----------------------
The legacy ``key=value`` syntax is used on the command line for ad hoc commands,
or inside playbooks. The use of this style is discouraged within playbooks
because backslash characters need to be escaped, making playbooks harder to read.
The legacy syntax depends on the specific implementation in Ansible, and quoting
(both single and double) does not have any effect on how it is parsed by
Ansible.
The Ansible key=value parser parse_kv() considers the following escape
sequences:
* ``\``, ``'``, ``"``, ``\a``, ``\b``, ``\f``, ``\n``, ``\r``, ``\t`` and
``\v`` -- Single character escape
* ``\x..`` -- 2-digit hex escape
* ``\u....`` -- 4-digit hex escape
* ``\U........`` -- 8-digit hex escape
* ``\N{...}`` -- Unicode character by name
This means that the backslash is an escape character for some sequences, and it
is usually safer to escape a backslash when in this form.
Here are some examples of using Windows paths with the key=value style:
.. code-block:: ini
# GOOD
tempdir=C:\\Windows\\Temp
# WORKS
tempdir='C:\\Windows\\Temp'
tempdir="C:\\Windows\\Temp"
# BAD, BUT SOMETIMES WORKS
tempdir=C:\Windows\Temp
tempdir='C:\Windows\Temp'
tempdir="C:\Windows\Temp"
tempdir=C:/Windows/Temp
# FAILS
tempdir=C:\Windows\temp
tempdir='C:\Windows\temp'
tempdir="C:\Windows\temp"
The failing examples don't fail outright but will substitute ``\t`` with the
``<TAB>`` character resulting in ``tempdir`` being ``C:\Windows<TAB>emp``.
Limitations
```````````
Some things you cannot do with Ansible and Windows are:
* Upgrade PowerShell
* Interact with the WinRM listeners
Because WinRM relies on these services being online and running during normal operations, you cannot upgrade PowerShell or interact with WinRM listeners with Ansible; both of these actions will cause the connection to fail. This can technically be avoided by using ``async`` or a scheduled task, but those methods are fragile if the process they run breaks the underlying connection Ansible uses, and they are best left to the bootstrapping process or run before an image is
created.
Developing Windows Modules
``````````````````````````
Because Ansible modules for Windows are written in PowerShell, the development
guides for Windows modules differ substantially from those for standard Python modules. Please see
:ref:`developing_modules_general_windows` for more information.
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,003 |
Docs: Replace latin terms with english in the os_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/os_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/os_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/os_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79003
|
https://github.com/ansible/ansible/pull/79007
|
78c9fb415954ca630f028fe7a7d154658fc41422
|
55925958ea8ea48273c7ade660ceea0e9e24f348
| 2022-10-03T20:11:55Z |
python
| 2022-10-03T20:51:44Z |
docs/docsite/rst/os_guide/windows_winrm.rst
|
.. _windows_winrm:
Windows Remote Management
=========================
Unlike Linux/Unix hosts, which use SSH by default, Windows hosts are
configured with WinRM. This topic covers how to configure and use WinRM with Ansible.
.. contents::
:local:
:depth: 2
What is WinRM?
----------------
WinRM is a management protocol used by Windows to remotely communicate with
another server. It is a SOAP-based protocol that communicates over HTTP/HTTPS, and is
included in all recent Windows operating systems. Since Windows
Server 2012, WinRM has been enabled by default, but in most cases extra
configuration is required to use WinRM with Ansible.
Ansible uses the `pywinrm <https://github.com/diyan/pywinrm>`_ package to
communicate with Windows servers over WinRM. It is not installed by default
with the Ansible package, but can be installed by running the following:
.. code-block:: shell
pip install "pywinrm>=0.3.0"
.. Note:: On distributions with multiple Python versions, use pip2 or pip2.x,
where x matches the Python minor version Ansible is running under.
.. Warning::
Using the ``winrm`` or ``psrp`` connection plugins in Ansible on macOS in
the latest releases typically fails. This is a known problem that occurs
deep within the Python stack and cannot be changed by Ansible. The only
workaround today is to set the environment variable ``no_proxy=*`` and
avoid using Kerberos auth.
.. _winrm_auth:
WinRM authentication options
-----------------------------
When connecting to a Windows host, there are several different options that can be used
when authenticating with an account. The authentication type may be set on inventory
hosts or groups with the ``ansible_winrm_transport`` variable.
The following matrix is a high level overview of the options:
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Option | Local Accounts | Active Directory Accounts | Credential Delegation | HTTP Encryption |
+=============+================+===========================+=======================+=================+
| Basic | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Certificate | Yes | No | No | No |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| Kerberos | No | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| NTLM | Yes | Yes | No | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
| CredSSP | Yes | Yes | Yes | Yes |
+-------------+----------------+---------------------------+-----------------------+-----------------+
.. _winrm_basic:
Basic
^^^^^^
Basic authentication is one of the simplest authentication options to use, but is
also the most insecure. This is because the username and password are simply
base64 encoded, and if a secure channel is not in use (for example, HTTPS) then it can be
decoded by anyone. Basic authentication can only be used for local accounts (not domain accounts).
The following example shows host vars configured for basic authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: basic
Basic authentication is not enabled by default on a Windows host but can be
enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $true
.. _winrm_certificate:
Certificate
^^^^^^^^^^^^
Certificate authentication uses certificates as keys similar to SSH key
pairs, but the file format and key generation process is different.
The following example shows host vars configured for certificate authentication:
.. code-block:: yaml+jinja
ansible_connection: winrm
ansible_winrm_cert_pem: /path/to/certificate/public/key.pem
ansible_winrm_cert_key_pem: /path/to/certificate/private/key.pem
ansible_winrm_transport: certificate
Certificate authentication is not enabled by default on a Windows host but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\Auth\Certificate -Value $true
.. Note:: Encrypted private keys cannot be used as the urllib3 library that
is used by Ansible for WinRM does not support this functionality.
.. _winrm_certificate_generate:
Generate a Certificate
++++++++++++++++++++++
A certificate must be generated before it can be mapped to a local user.
This can be done using one of the following methods:
* OpenSSL
* PowerShell, using the ``New-SelfSignedCertificate`` cmdlet
* Active Directory Certificate Services
Active Directory Certificate Services is beyond the scope of this documentation but may be
the best option to use when running in a domain environment. For more information,
see the `Active Directory Certificate Services documentation <https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc732625(v=ws.11)>`_.
.. Note:: Using the PowerShell cmdlet ``New-SelfSignedCertificate`` to generate
a certificate for authentication only works when being generated from a
Windows 10 or Windows Server 2012 R2 host or later. OpenSSL is still required to
extract the private key from the PFX certificate to a PEM file for Ansible
to use.
To generate a certificate with ``OpenSSL``:
.. code-block:: shell
# Set the name of the local user that will have the key mapped to it
USERNAME="username"
cat > openssl.conf << EOL
distinguished_name = req_distinguished_name
[req_distinguished_name]
[v3_req_client]
extendedKeyUsage = clientAuth
subjectAltName = otherName:1.3.6.1.4.1.311.20.2.3;UTF8:$USERNAME@localhost
EOL
export OPENSSL_CONF=openssl.conf
openssl req -x509 -nodes -days 3650 -newkey rsa:2048 -out cert.pem -outform PEM -keyout cert_key.pem -subj "/CN=$USERNAME" -extensions v3_req_client
rm openssl.conf
To generate a certificate with ``New-SelfSignedCertificate``:
.. code-block:: powershell
# Set the name of the local user that will have the key mapped to it
$username = "username"
$output_path = "C:\temp"
# Instead of generating a file, the cert will be added to the personal
# LocalComputer folder in the certificate store
$cert = New-SelfSignedCertificate -Type Custom `
-Subject "CN=$username" `
-TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2","2.5.29.17={text}upn=$username@localhost") `
-KeyUsage DigitalSignature,KeyEncipherment `
-KeyAlgorithm RSA `
-KeyLength 2048
# Export the public key
$pem_output = @()
$pem_output += "-----BEGIN CERTIFICATE-----"
$pem_output += [System.Convert]::ToBase64String($cert.RawData) -replace ".{64}", "$&`n"
$pem_output += "-----END CERTIFICATE-----"
[System.IO.File]::WriteAllLines("$output_path\cert.pem", $pem_output)
# Export the private key in a PFX file
[System.IO.File]::WriteAllBytes("$output_path\cert.pfx", $cert.Export("Pfx"))
.. Note:: To convert the PFX file to a private key that pywinrm can use, run
the following command with OpenSSL
``openssl pkcs12 -in cert.pfx -nocerts -nodes -out cert_key.pem -passin pass: -passout pass:``
.. _winrm_certificate_import:
Import a Certificate to the Certificate Store
+++++++++++++++++++++++++++++++++++++++++++++
Once a certificate has been generated, the issuing certificate needs to be
imported into the ``Trusted Root Certificate Authorities`` of the
``LocalMachine`` store, and the client certificate public key must be present
in the ``Trusted People`` folder of the ``LocalMachine`` store. For this example,
both the issuing certificate and public key are the same.
The following example shows how to import the issuing certificate:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::Root
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. Note:: If using ADCS to generate the certificate, then the issuing
certificate will already be imported and this step can be skipped.
The code to import the client certificate public key is:
.. code-block:: powershell
$cert = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Certificate2 "cert.pem"
$store_name = [System.Security.Cryptography.X509Certificates.StoreName]::TrustedPeople
$store_location = [System.Security.Cryptography.X509Certificates.StoreLocation]::LocalMachine
$store = New-Object -TypeName System.Security.Cryptography.X509Certificates.X509Store -ArgumentList $store_name, $store_location
$store.Open("MaxAllowed")
$store.Add($cert)
$store.Close()
.. _winrm_certificate_mapping:
Mapping a Certificate to an Account
+++++++++++++++++++++++++++++++++++
Once the certificate has been imported, map it to the local user account:
.. code-block:: powershell
$username = "username"
$password = ConvertTo-SecureString -String "password" -AsPlainText -Force
$credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $username, $password
# This is the issuer thumbprint, which in the case of a self-generated cert
# is the public key thumbprint; additional logic may be required for other
# scenarios
$thumbprint = (Get-ChildItem -Path cert:\LocalMachine\root | Where-Object { $_.Subject -eq "CN=$username" }).Thumbprint
New-Item -Path WSMan:\localhost\ClientCertificate `
-Subject "$username@localhost" `
-URI * `
-Issuer $thumbprint `
-Credential $credential `
-Force
Once this is complete, the hostvar ``ansible_winrm_cert_pem`` should be set to
the path of the public key and the ``ansible_winrm_cert_key_pem`` variable should be set to
the path of the private key.
.. _winrm_ntlm:
NTLM
^^^^^
NTLM is an older authentication mechanism used by Microsoft that can support
both local and domain accounts. NTLM is enabled by default on the WinRM
service, so no setup is required before using it.
NTLM is the easiest authentication protocol to use and is more secure than
``Basic`` authentication. If running in a domain environment, ``Kerberos`` should be used
instead of NTLM.
Kerberos has several advantages over using NTLM:
* NTLM is an older protocol and does not support newer encryption
protocols.
* NTLM is slower to authenticate because it requires more round trips to the host in
the authentication stage.
* Unlike Kerberos, NTLM does not allow credential delegation.
This example shows host variables configured to use NTLM authentication:
.. code-block:: yaml+jinja
ansible_user: LocalUsername
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: ntlm
.. _winrm_kerberos:
Kerberos
^^^^^^^^^
Kerberos is the recommended authentication option to use when running in a
domain environment. Kerberos supports features like credential delegation and
message encryption over HTTP and is one of the more secure options that
is available through WinRM.
Kerberos requires some additional setup work on the Ansible host before it can be
used properly.
The following example shows host vars configured for Kerberos authentication:
.. code-block:: yaml+jinja
ansible_user: [email protected]
ansible_password: Password
ansible_connection: winrm
ansible_port: 5985
ansible_winrm_transport: kerberos
As of Ansible version 2.3, the Kerberos ticket will be created based on
``ansible_user`` and ``ansible_password``. If running on an older version of
Ansible or when ``ansible_winrm_kinit_mode`` is ``manual``, a Kerberos
ticket must already be obtained. See below for more details.
There are some extra host variables that can be set:
.. code-block:: yaml
ansible_winrm_kinit_mode: managed/manual (manual means Ansible will not obtain a ticket)
ansible_winrm_kinit_cmd: the kinit binary to use to obtain a Kerberos ticket (default to kinit)
ansible_winrm_service: overrides the SPN prefix that is used, the default is ``HTTP`` and should rarely ever need changing
ansible_winrm_kerberos_delegation: allows the credentials to traverse multiple hops
ansible_winrm_kerberos_hostname_override: the hostname to be used for the kerberos exchange
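As a minimal sketch, a host entry that keeps automatic ticket management but
enables credential delegation might set the following (the hostname is
illustrative):
.. code-block:: yaml+jinja
   ansible_winrm_kinit_mode: managed
   ansible_winrm_kerberos_delegation: true
   ansible_winrm_kerberos_hostname_override: server.my.domain.com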
.. _winrm_kerberos_install:
Installing the Kerberos Library
+++++++++++++++++++++++++++++++
There are some system dependencies that must be installed prior to using Kerberos. The script below lists the dependencies based on the distro:
.. code-block:: shell
# Through Yum (RHEL/CentOS/Fedora for the older versions)
yum -y install gcc python-devel krb5-devel krb5-libs krb5-workstation
# Through DNF (RHEL/CentOS/Fedora for the newer versions)
dnf -y install gcc python3-devel krb5-devel krb5-libs krb5-workstation
# Through Apt (Ubuntu)
sudo apt-get install python-dev libkrb5-dev krb5-user
# Through Portage (Gentoo)
emerge -av app-crypt/mit-krb5
emerge -av dev-python/setuptools
# Through Pkg (FreeBSD)
sudo pkg install security/krb5
# Through OpenCSW (Solaris)
pkgadd -d http://get.opencsw.org/now
/opt/csw/bin/pkgutil -U
/opt/csw/bin/pkgutil -y -i libkrb5_3
# Through Pacman (Arch Linux)
pacman -S krb5
Once the dependencies have been installed, the ``python-kerberos`` wrapper can
be installed using ``pip``:
.. code-block:: shell
pip install pywinrm[kerberos]
.. note::
While Ansible has supported Kerberos auth through ``pywinrm`` for some
time, optional features or more secure options may only be available in
newer versions of the ``pywinrm`` and/or ``pykerberos`` libraries. It is
recommended you upgrade each version to the latest available to resolve
any warnings or errors. This can be done through tools like ``pip`` or a
system package manager like ``dnf``, ``yum``, or ``apt``, but the package
names and versions available may differ between tools.
.. _winrm_kerberos_config:
Configuring Host Kerberos
+++++++++++++++++++++++++
Once the dependencies have been installed, Kerberos needs to be configured so
that it can communicate with a domain. This configuration is done through the
``/etc/krb5.conf`` file, which is installed with the packages in the script above.
To configure Kerberos, in the section that starts with:
.. code-block:: ini
[realms]
Add the full domain name and the fully qualified domain names of the primary
and secondary Active Directory domain controllers. It should look something
like this:
.. code-block:: ini
[realms]
MY.DOMAIN.COM = {
kdc = domain-controller1.my.domain.com
kdc = domain-controller2.my.domain.com
}
In the section that starts with:
.. code-block:: ini
[domain_realm]
Add a line like the following for each domain that Ansible needs access to:
.. code-block:: ini
[domain_realm]
.my.domain.com = MY.DOMAIN.COM
You can configure other settings in this file such as the default domain. See
`krb5.conf <https://web.mit.edu/kerberos/krb5-1.12/doc/admin/conf_files/krb5_conf.html>`_
for more details.
.. _winrm_kerberos_ticket_auto:
Automatic Kerberos Ticket Management
++++++++++++++++++++++++++++++++++++
Ansible version 2.3 and later defaults to automatically managing Kerberos tickets
when both ``ansible_user`` and ``ansible_password`` are specified for a host. In
this process, a new ticket is created in a temporary credential cache for each
host. This is done before each task executes to minimize the chance of ticket
expiration. The temporary credential caches are deleted after each task
completes and will not interfere with the default credential cache.
To disable automatic ticket management, set ``ansible_winrm_kinit_mode=manual``
in the inventory.
Automatic ticket management requires a standard ``kinit`` binary on the control
host system path. To specify a different location or binary name, set the
``ansible_winrm_kinit_cmd`` hostvar to the fully qualified path to an MIT krb5
``kinit``-compatible binary.
.. _winrm_kerberos_ticket_manual:
Manual Kerberos Ticket Management
+++++++++++++++++++++++++++++++++
To manually manage Kerberos tickets, the ``kinit`` binary is used. To
obtain a new ticket the following command is used:
.. code-block:: shell
kinit [email protected]
.. Note:: The domain must match the configured Kerberos realm exactly, and must be in upper case.
To see what tickets (if any) have been acquired, use the following command:
.. code-block:: shell
klist
To destroy all the tickets that have been acquired, use the following command:
.. code-block:: shell
kdestroy
.. _winrm_kerberos_troubleshoot:
Troubleshooting Kerberos
++++++++++++++++++++++++
Kerberos is reliant on a properly-configured environment to
work. To troubleshoot Kerberos issues, ensure that:
* The hostname set for the Windows host is the FQDN and not an IP address.
* If you connect using an IP address, you will get the error message ``Server not found in Kerberos database``.
* To determine if you are connecting using an IP address or an FQDN, run your playbook (or call the ``win_ping`` module) using the ``-vvv`` flag.
* The forward and reverse DNS lookups are working properly in the domain. To
test this, ping the Windows host by name and then use the IP address returned
with ``nslookup``. The same name should be returned when using ``nslookup``
on the IP address.
* The Ansible host's clock is synchronized with the domain controller. Kerberos
is time sensitive, and a little clock drift can cause the ticket generation
process to fail.
* Ensure that the fully qualified domain name for the domain is configured in
the ``krb5.conf`` file. To check this, run:
.. code-block:: console
kinit -C [email protected]
klist
If the domain name returned by ``klist`` is different from the one requested,
an alias is being used. The ``krb5.conf`` file needs to be updated so that
the fully qualified domain name is used and not an alias.
* If the default Kerberos tooling has been replaced or modified (some IdM solutions may do this), this may cause issues when installing or upgrading the Python Kerberos library. As of the time of this writing, this library is called ``pykerberos`` and is known to work with both MIT and Heimdal Kerberos libraries. To resolve ``pykerberos`` installation issues, ensure the system dependencies for Kerberos have been met (see: `Installing the Kerberos Library`_), remove any custom Kerberos tooling paths from the PATH environment variable, and retry the installation of the Python Kerberos library package.
.. _winrm_credssp:
CredSSP
^^^^^^^
CredSSP authentication is a newer authentication protocol that allows
credential delegation. This is achieved by encrypting the username and password
after authentication has succeeded and sending that to the server using the
CredSSP protocol.
Because the username and password are sent to the server to be used for double
hop authentication, ensure that the hosts that the Windows host communicates with are
not compromised and are trusted.
CredSSP can be used for both local and domain accounts and also supports
message encryption over HTTP.
To use CredSSP authentication, the host vars are configured like so:
.. code-block:: yaml+jinja
ansible_user: Username
ansible_password: Password
ansible_connection: winrm
ansible_winrm_transport: credssp
There are some extra host variables that can be set as shown below:
.. code-block:: yaml
ansible_winrm_credssp_disable_tlsv1_2: when true, will not use TLS 1.2 in the CredSSP auth process
CredSSP authentication is not enabled by default on a Windows host, but can
be enabled by running the following in PowerShell:
.. code-block:: powershell
Enable-WSManCredSSP -Role Server -Force
.. _winrm_credssp_install:
Installing CredSSP Library
++++++++++++++++++++++++++
The ``requests-credssp`` wrapper can be installed using ``pip``:
.. code-block:: bash
pip install pywinrm[credssp]
.. _winrm_credssp_tls:
CredSSP and TLS 1.2
+++++++++++++++++++
By default the ``requests-credssp`` library is configured to authenticate over
the TLS 1.2 protocol. TLS 1.2 is installed and enabled by default for Windows Server 2012
and Windows 8 and more recent releases.
There are two ways that older hosts can be used with CredSSP:
* Install and enable a hotfix to enable TLS 1.2 support (recommended
for Server 2008 R2 and Windows 7).
* Set ``ansible_winrm_credssp_disable_tlsv1_2=True`` in the inventory to run
over TLS 1.0. This is the only option when connecting to Windows Server 2008, which
has no way of supporting TLS 1.2.
See :ref:`winrm_tls12` for more information on how to enable TLS 1.2 on the
Windows host.
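As a sketch, forcing the CredSSP exchange over TLS 1.0 for an older host is
just a host variable (only do this when TLS 1.2 is genuinely unavailable):
.. code-block:: yaml+jinja
   ansible_winrm_transport: credssp
   ansible_winrm_credssp_disable_tlsv1_2: true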
.. _winrm_credssp_cert:
Set CredSSP Certificate
+++++++++++++++++++++++
CredSSP works by encrypting the credentials through the TLS protocol and uses a self-signed certificate by default. The ``CertificateThumbprint`` option under the WinRM service configuration can be used to specify the thumbprint of
another certificate.
.. Note:: This certificate configuration is independent of the WinRM listener
certificate. With CredSSP, message transport still occurs over the WinRM listener,
but the TLS-encrypted messages inside the channel use the service-level certificate.
To explicitly set the certificate to use for CredSSP:
.. code-block:: powershell
# Note the value $certificate_thumbprint will be different in each
# situation, this needs to be set based on the cert that is used.
$certificate_thumbprint = "7C8DCBD5427AFEE6560F4AF524E325915F51172C"
# Set the thumbprint value
Set-Item -Path WSMan:\localhost\Service\CertificateThumbprint -Value $certificate_thumbprint
.. _winrm_nonadmin:
Non-Administrator Accounts
---------------------------
WinRM is configured by default to only allow connections from accounts in the local
``Administrators`` group. This can be changed by running:
.. code-block:: powershell
winrm configSDDL default
This will display an ACL editor, where new users or groups may be added. To run commands
over WinRM, users and groups must have at least the ``Read`` and ``Execute`` permissions
enabled.
While non-administrative accounts can be used with WinRM, most typical server administration
tasks require some level of administrative access, so the utility is usually limited.
.. _winrm_encrypt:
WinRM Encryption
-----------------
By default WinRM will fail to work when running over an unencrypted channel.
The WinRM protocol considers the channel to be encrypted if using TLS over HTTP
(HTTPS) or using message level encryption. Using WinRM with TLS is the
recommended option as it works with all authentication options, but requires
a certificate to be created and used on the WinRM listener.
If in a domain environment, ADCS can create a certificate for the host that
is issued by the domain itself.
If using HTTPS is not an option, then HTTP can be used when the authentication
option is ``NTLM``, ``Kerberos`` or ``CredSSP``. These protocols will encrypt
the WinRM payload with their own encryption method before sending it to the
server. The message-level encryption is not used when running over HTTPS because the
encryption uses the more secure TLS protocol instead. If both transport and
message encryption is required, set ``ansible_winrm_message_encryption=always``
in the host vars.
.. Note:: Message encryption over HTTP requires pywinrm>=0.3.0.
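For example, host variables that require message encryption in addition to
HTTPS transport encryption might look like this (values are illustrative):
.. code-block:: yaml+jinja
   ansible_connection: winrm
   ansible_winrm_scheme: https
   ansible_winrm_message_encryption: always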
A last resort is to disable the encryption requirement on the Windows host. This
should only be used for development and debugging purposes, as anything sent
from Ansible can be viewed or manipulated, and the remote session can be
completely taken over, by anyone on the same network. To disable the encryption
requirement:
.. code-block:: powershell
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $true
.. Note:: Do not disable the encryption check unless it is
absolutely required. Doing so could allow sensitive information like
credentials and files to be intercepted by others on the network.
.. _winrm_inventory:
Inventory Options
------------------
Ansible's Windows support relies on a few standard variables to indicate the
username, password, and connection type of the remote hosts. These variables
are most easily set up in the inventory, but can also be set at the
``host_vars``/``group_vars`` level.
When setting up the inventory, the following variables are required:
.. code-block:: yaml+jinja
# It is suggested that these be encrypted with ansible-vault:
# ansible-vault edit group_vars/windows.yml
ansible_connection: winrm
# May also be passed on the command-line with --user
ansible_user: Administrator
# May also be supplied at runtime with --ask-pass
ansible_password: SecretPasswordGoesHere
Using the variables above, Ansible will connect to the Windows host with Basic
authentication through HTTPS. If ``ansible_user`` has a UPN value like
``[email protected]`` then the authentication option will automatically attempt
to use Kerberos unless ``ansible_winrm_transport`` has been set to something other than
``kerberos``.
The following custom inventory variables are also supported
for additional configuration of WinRM connections:
* ``ansible_port``: The port WinRM will run over. HTTPS uses ``5986``, which is
the default, while HTTP uses ``5985``
* ``ansible_winrm_scheme``: Specify the connection scheme (``http`` or
``https``) to use for the WinRM connection. Ansible uses ``https`` by default
unless ``ansible_port`` is ``5985``
* ``ansible_winrm_path``: Specify an alternate path to the WinRM endpoint,
Ansible uses ``/wsman`` by default
* ``ansible_winrm_realm``: Specify the realm to use for Kerberos
authentication. If ``ansible_user`` contains ``@``, Ansible will use the part
of the username after ``@`` by default
* ``ansible_winrm_transport``: Specify one or more authentication transport
options as a comma-separated list. By default, Ansible will use ``kerberos,
basic`` if the ``kerberos`` module is installed and a realm is defined,
otherwise it will be ``plaintext``
* ``ansible_winrm_server_cert_validation``: Specify the server certificate
validation mode (``ignore`` or ``validate``). Ansible defaults to
``validate`` on Python 2.7.9 and higher, which will result in certificate
validation errors against the Windows self-signed certificates. Unless
verifiable certificates have been configured on the WinRM listeners, this
should be set to ``ignore``
* ``ansible_winrm_operation_timeout_sec``: Increase the default timeout for
WinRM operations, Ansible uses ``20`` by default
* ``ansible_winrm_read_timeout_sec``: Increase the WinRM read timeout, Ansible
uses ``30`` by default. Useful if there are intermittent network issues and
read timeout errors keep occurring
* ``ansible_winrm_message_encryption``: Specify the message encryption
operation (``auto``, ``always``, ``never``) to use, Ansible uses ``auto`` by
default. ``auto`` means message encryption is only used when
``ansible_winrm_scheme`` is ``http`` and ``ansible_winrm_transport`` supports
message encryption. ``always`` means message encryption will always be used
and ``never`` means message encryption will never be used
* ``ansible_winrm_ca_trust_path``: Used to specify a different cacert container
than the one used in the ``certifi`` module. See the HTTPS Certificate
Validation section for more details.
* ``ansible_winrm_send_cbt``: When using ``ntlm`` or ``kerberos`` over HTTPS,
the authentication library will try to send channel binding tokens to
mitigate against man in the middle attacks. This flag controls whether these
bindings will be sent or not (default: ``yes``).
* ``ansible_winrm_*``: Any additional keyword arguments supported by
``winrm.Protocol`` may be provided in place of ``*``
In addition, there are also specific variables that need to be set
for each authentication option. See the section on authentication above for more information.
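As an illustration only, a ``group_vars/windows.yml`` file combining several
of these options might look like the following (credentials and values are
placeholders):
.. code-block:: yaml+jinja
   ansible_connection: winrm
   ansible_user: Administrator
   ansible_password: SecretPasswordGoesHere
   ansible_port: 5986
   ansible_winrm_scheme: https
   ansible_winrm_server_cert_validation: ignore
   ansible_winrm_read_timeout_sec: 60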
.. Note:: Ansible 2.0 has deprecated the "ssh" from ``ansible_ssh_user``,
``ansible_ssh_pass``, ``ansible_ssh_host``, and ``ansible_ssh_port`` to
become ``ansible_user``, ``ansible_password``, ``ansible_host``, and
``ansible_port``. If using a version of Ansible prior to 2.0, the older
style (``ansible_ssh_*``) should be used instead. The shorter variables
are ignored, without warning, in older versions of Ansible.
.. Note:: ``ansible_winrm_message_encryption`` is different from transport
encryption done over TLS. The WinRM payload is still encrypted with TLS
when run over HTTPS, even if ``ansible_winrm_message_encryption=never``.
.. _winrm_ipv6:
IPv6 Addresses
---------------
IPv6 addresses can be used instead of IPv4 addresses or hostnames. This option
is normally set in an inventory. Ansible will attempt to parse the address
using the `ipaddress <https://docs.python.org/3/library/ipaddress.html>`_
package and pass it to pywinrm correctly.
When defining a host using an IPv6 address, just add the IPv6 address as you
would an IPv4 address or hostname:
.. code-block:: ini
[windows-server]
2001:db8::1
[windows-server:vars]
ansible_user=username
ansible_password=password
ansible_connection=winrm
.. Note:: The ipaddress library is only included by default in Python 3.x. To
use IPv6 addresses in Python 2.7, make sure to run ``pip install ipaddress`` which installs
a backported package.
.. _winrm_https:
HTTPS Certificate Validation
-----------------------------
As part of the TLS protocol, the certificate is validated to ensure the host
matches the subject and the client trusts the issuer of the server certificate.
When using a self-signed certificate or setting
``ansible_winrm_server_cert_validation: ignore`` these security mechanisms are
bypassed. While self-signed certificates will always need the ``ignore`` flag,
certificates that have been issued from a certificate authority can still be
validated.
One of the more common ways of setting up an HTTPS listener in a domain
environment is to use Active Directory Certificate Service (AD CS). AD CS is
used to generate signed certificates from a Certificate Signing Request (CSR).
If the WinRM HTTPS listener is using a certificate that has been signed by
another authority, like AD CS, then Ansible can be set up to trust that
issuer as part of the TLS handshake.
To get Ansible to trust a Certificate Authority (CA) like AD CS, the issuer
certificate of the CA can be exported as a PEM encoded certificate. This
certificate can then be copied locally to the Ansible controller and used as a
source of certificate validation, otherwise known as a CA chain.
The CA chain can contain a single or multiple issuer certificates and each
entry is contained on a new line. To then use the custom CA chain as part of
the validation process, set ``ansible_winrm_ca_trust_path`` to the path of the
file. If this variable is not set, the default CA chain is used instead which
is located in the install path of the Python package
`certifi <https://github.com/certifi/python-certifi>`_.
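For example, assuming the exported CA chain has been copied to an illustrative
path on the Ansible controller, certificate validation can stay enabled while
trusting the custom issuer:
.. code-block:: yaml+jinja
   ansible_winrm_server_cert_validation: validate
   ansible_winrm_ca_trust_path: /etc/pki/winrm/ca-chain.pem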
.. Note:: Each HTTP call is done by the Python requests library which does not
use the system's built-in certificate store as a trust authority.
Certificate validation will fail if the server's certificate issuer is
only added to the system's truststore.
.. _winrm_tls12:
TLS 1.2 Support
----------------
As WinRM runs over the HTTP protocol, using HTTPS means that the TLS protocol
is used to encrypt the WinRM messages. TLS will automatically attempt to
negotiate the best protocol and cipher suite that is available to both the
client and the server. If a match cannot be found then Ansible will error out
with a message similar to:
.. code-block:: ansible-output
HTTPSConnectionPool(host='server', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
Commonly this is when the Windows host has not been configured to support
TLS v1.2 but it could also mean the Ansible controller has an older OpenSSL
version installed.
Windows 8 and Windows Server 2012 come with TLS v1.2 installed and enabled by
default but older hosts, like Server 2008 R2 and Windows 7, have to be enabled
manually.
.. Note:: There is a bug with the TLS 1.2 patch for Server 2008 which will stop
Ansible from connecting to the Windows host. This means that Server 2008
cannot be configured to use TLS 1.2. Server 2008 R2 and Windows 7 are not
affected by this issue and can use TLS 1.2.
To verify what protocol the Windows host supports, you can run the following
command on the Ansible controller:
.. code-block:: shell
openssl s_client -connect <hostname>:5986
The output will contain information about the TLS session and the ``Protocol``
line will display the version that was negotiated:
.. code-block:: console
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1
Cipher : ECDHE-RSA-AES256-SHA
Session-ID: 962A00001C95D2A601BE1CCFA7831B85A7EEE897AECDBF3D9ECD4A3BE4F6AC9B
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976474
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-GCM-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
SSL-Session:
Protocol : TLSv1.2
Cipher : ECDHE-RSA-AES256-GCM-SHA384
Session-ID: AE16000050DA9FD44D03BB8839B64449805D9E43DBD670346D3D9E05D1AEEA84
Session-ID-ctx:
Master-Key: ....
Start Time: 1552976538
Timeout : 7200 (sec)
Verify return code: 21 (unable to verify the first certificate)
If the host is returning ``TLSv1`` then it should be configured so that
TLS v1.2 is enabled. You can do this by running the following PowerShell
script:
.. code-block:: powershell
Function Enable-TLS12 {
param(
[ValidateSet("Server", "Client")]
[String]$Component = "Server"
)
$protocols_path = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols'
New-Item -Path "$protocols_path\TLS 1.2\$Component" -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name Enabled -Value 1 -Type DWORD -Force
New-ItemProperty -Path "$protocols_path\TLS 1.2\$Component" -Name DisabledByDefault -Value 0 -Type DWORD -Force
}
Enable-TLS12 -Component Server
# Not required but highly recommended to enable the Client side TLS 1.2 components
Enable-TLS12 -Component Client
Restart-Computer
The below Ansible tasks can also be used to enable TLS v1.2:
.. code-block:: yaml+jinja
- name: enable TLSv1.2 support
win_regedit:
path: HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\{{ item.type }}
name: '{{ item.property }}'
data: '{{ item.value }}'
type: dword
state: present
register: enable_tls12
loop:
- type: Server
property: Enabled
value: 1
- type: Server
property: DisabledByDefault
value: 0
- type: Client
property: Enabled
value: 1
- type: Client
property: DisabledByDefault
value: 0
- name: reboot if TLS config was applied
win_reboot:
when: enable_tls12 is changed
There are other ways to configure the TLS protocols as well as the cipher
suites that are offered by the Windows host. One tool that can give you a GUI
to manage these settings is `IIS Crypto <https://www.nartac.com/Products/IISCrypto/>`_
from Nartac Software.
.. _winrm_limitations:
WinRM limitations
------------------
Due to the design of the WinRM protocol, there are a few limitations
when using WinRM that can cause issues when creating playbooks for Ansible.
These include:
* Credentials are not delegated for most authentication types, which causes
authentication errors when accessing network resources or installing certain
programs.
* Many calls to the Windows Update API are blocked when running over WinRM.
* Some programs fail to install with WinRM due to no credential delegation or
because they access forbidden Windows APIs, like WUA, over WinRM.
* Commands over WinRM run in a non-interactive session, which can prevent
certain commands or executables from running.
* You cannot run a process that interacts with ``DPAPI``, which is used by some
installers (like Microsoft SQL Server).
Some of these limitations can be mitigated by doing one of the following:
* Set ``ansible_winrm_transport`` to ``credssp`` or ``kerberos`` (with
``ansible_winrm_kerberos_delegation=true``) to bypass the double hop issue
and access network resources
* Use ``become`` to bypass all WinRM restrictions and run a command as it would
locally. Unlike using an authentication transport like ``credssp``, this will
also remove the non-interactive restriction and API restrictions like WUA and
DPAPI
* Use a scheduled task to run a command which can be created with the
``win_scheduled_task`` module. Like ``become``, this bypasses all WinRM
restrictions but can only run a command and not modules.
.. seealso::
:ref:`playbooks_intro`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
:ref:`List of Windows Modules <windows_modules>`
Windows specific module list, all implemented in PowerShell
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,006 |
Docs: Replace latin terms with english in the playbook_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/playbook_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/playbook_guide/ directory to find these.
A list of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/playbook_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79006
|
https://github.com/ansible/ansible/pull/79009
|
173ddde126da34f522f80009ceb8bb25b62a5c92
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
| 2022-10-03T20:15:09Z |
python
| 2022-10-04T02:35:15Z |
docs/docsite/rst/playbook_guide/playbooks_intro.rst
|
.. _about_playbooks:
.. _playbooks_intro:
*****************
Ansible playbooks
*****************
Ansible Playbooks offer a repeatable, re-usable, simple configuration management and multi-machine deployment system, one that is well suited to deploying complex applications. If you need to execute a task with Ansible more than once, write a playbook and put it under source control. Then you can use the playbook to push out new configuration or confirm the configuration of remote systems. The playbooks in the `ansible-examples repository <https://github.com/ansible/ansible-examples>`_ illustrate many useful techniques. You may want to look at these in another tab as you read the documentation.
Playbooks can:
* declare configurations
* orchestrate steps of any manual ordered process, on multiple sets of machines, in a defined order
* launch tasks synchronously or :ref:`asynchronously <playbooks_async>`
.. contents::
:local:
.. _playbook_language_example:
Playbook syntax
===============
Playbooks are expressed in YAML format with a minimum of syntax. If you are not familiar with YAML, look at our overview of :ref:`yaml_syntax` and consider installing an add-on for your text editor (see :ref:`other_tools_and_programs`) to help you write clean YAML syntax in your playbooks.
A playbook is composed of one or more 'plays' in an ordered list. The terms 'playbook' and 'play' are sports analogies. Each play executes part of the overall goal of the playbook, running one or more tasks. Each task calls an Ansible module.
Playbook execution
==================
A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple 'plays' can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things:
* the managed nodes to target, using a :ref:`pattern <intro_patterns>`
* at least one task to execute
.. note::
In Ansible 2.10 and later, we recommend you use the fully-qualified collection name in your playbooks to ensure the correct module is selected, because multiple collections can contain modules with the same name (for example, ``user``). See :ref:`collections_using_playbook`.
In this example, the first play targets the web servers; the second play targets the database servers.
.. code-block:: yaml
---
- name: Update web servers
hosts: webservers
remote_user: root
tasks:
- name: Ensure apache is at the latest version
ansible.builtin.yum:
name: httpd
state: latest
- name: Write the apache config file
ansible.builtin.template:
src: /srv/httpd.j2
dest: /etc/httpd.conf
- name: Update db servers
hosts: databases
remote_user: root
tasks:
- name: Ensure postgresql is at the latest version
ansible.builtin.yum:
name: postgresql
state: latest
- name: Ensure that postgresql is started
ansible.builtin.service:
name: postgresql
state: started
Your playbook can include more than just a hosts line and tasks. For example, the playbook above sets a ``remote_user`` for each play. This is the user account for the SSH connection. You can add other :ref:`playbook_keywords` at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the :ref:`connection plugin <connection_plugins>`, whether to use :ref:`privilege escalation <become>`, how to handle errors, and more. To support a variety of environments, Ansible lets you set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the :ref:`precedence rules <general_precedence_rules>` for these sources of data will help you as you expand your Ansible ecosystem.
.. _tasks_list:
Task execution
--------------
By default, Ansible executes each task in order, one at a time, against all machines matched by the host pattern. Each task executes a module with specific arguments. When a task has executed on all target machines, Ansible moves on to the next task. You can use :ref:`strategies <playbooks_strategies>` to change this default behavior. Within each play, Ansible applies the same task directives to all hosts. If a task fails on a host, Ansible takes that host out of the rotation for the rest of the playbook.
When you run a playbook, Ansible returns information about connections, the ``name`` lines of all your plays and tasks, whether each task has succeeded or failed on each machine, and whether each task has made a change on each machine. At the bottom of the playbook execution, Ansible provides a summary of the nodes that were targeted and how they performed. General failures and fatal "unreachable" communication attempts are kept separate in the counts.
.. _idempotency:
Desired state and 'idempotency'
-------------------------------
Most Ansible modules check whether the desired final state has already been achieved, and exit without performing any actions if that state has been achieved, so that repeating the task does not change the final state. Modules that behave this way are often called 'idempotent.' Whether you run a playbook once, or multiple times, the outcome should be the same. However, not all playbooks and not all modules behave this way. If you are unsure, test your playbooks in a sandbox environment before running them multiple times in production.
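For instance, a task like the following (the path is illustrative) reports ``changed`` on the first run and ``ok`` on subsequent runs, because the directory already exists:
.. code-block:: yaml
   - name: Ensure a configuration directory exists
     ansible.builtin.file:
       path: /srv/app/config
       state: directory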
.. _executing_a_playbook:
Running playbooks
-----------------
To run your playbook, use the :ref:`ansible-playbook` command.
.. code-block:: bash
ansible-playbook playbook.yml -f 10
Use the ``--verbose`` flag when running your playbook to see detailed output from successful modules as well as unsuccessful ones.
.. _playbook_ansible-pull:
Ansible-Pull
============
Should you want to invert the architecture of Ansible, so that nodes check in to a central location, instead
of pushing configuration out to them, you can.
``ansible-pull`` is a small script that will check out a repo of configuration instructions from git, and then
run ``ansible-playbook`` against that content.
Assuming you load balance your checkout location, ``ansible-pull`` scales essentially infinitely.
Run ``ansible-pull --help`` for details.
There's also a `clever playbook <https://github.com/ansible/ansible-examples/blob/master/language_features/ansible_pull.yml>`_ available to configure ``ansible-pull`` through a crontab from push mode.
Verifying playbooks
===================
You may want to verify your playbooks to catch syntax errors and other problems before you run them. The :ref:`ansible-playbook` command offers several options for verification, including ``--check``, ``--diff``, ``--list-hosts``, ``--list-tasks``, and ``--syntax-check``. The :ref:`validate-playbook-tools` describes other tools for validating and testing playbooks.
.. _linting_playbooks:
ansible-lint
------------
You can use `ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_ for detailed, Ansible-specific feedback on your playbooks before you execute them. For example, if you run ``ansible-lint`` on the playbook called ``verify-apache.yml`` near the top of this page, you should get the following results:
.. code-block:: bash
$ ansible-lint verify-apache.yml
[403] Package installs should not use latest
verify-apache.yml:8
Task/Handler: ensure apache is at the latest version
The `ansible-lint default rules <https://docs.ansible.com/ansible-lint/rules/default_rules.html>`_ page describes each error. For ``[403]``, the recommended fix is to change ``state: latest`` to ``state: present`` in the playbook.
.. seealso::
`ansible-lint <https://docs.ansible.com/ansible-lint/index.html>`_
Learn how to test Ansible Playbooks syntax
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`tips_and_tricks`
Tips for managing playbooks in the real world
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`developing_modules`
Learn to extend Ansible by writing your own modules
:ref:`intro_patterns`
Learn about how to select hosts
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_
Complete end-to-end playbook examples
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,006 |
Docs: Replace latin terms with english in the playbook_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/playbook_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/playbook_guide/ directory to find these.
A list of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/playbook_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79006
|
https://github.com/ansible/ansible/pull/79009
|
173ddde126da34f522f80009ceb8bb25b62a5c92
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
| 2022-10-03T20:15:09Z |
python
| 2022-10-04T02:35:15Z |
docs/docsite/rst/playbook_guide/playbooks_loops.rst
|
.. _playbooks_loops:
*****
Loops
*****
Ansible offers the ``loop``, ``with_<lookup>``, and ``until`` keywords to execute a task multiple times. Examples of commonly-used loops include changing ownership on several files and/or directories with the :ref:`file module <file_module>`, creating multiple users with the :ref:`user module <user_module>`, and
repeating a polling step until a certain result is reached.
.. note::
* We added ``loop`` in Ansible 2.5. It is not yet a full replacement for ``with_<lookup>``, but we recommend it for most use cases.
* We have not deprecated the use of ``with_<lookup>`` - that syntax will still be valid for the foreseeable future.
* We are looking to improve ``loop`` syntax - watch this page and the `changelog <https://github.com/ansible/ansible/tree/devel/changelogs>`_ for updates.
.. contents::
:local:
Comparing ``loop`` and ``with_*``
=================================
* The ``with_<lookup>`` keywords rely on :ref:`lookup_plugins` - even ``items`` is a lookup.
* The ``loop`` keyword is equivalent to ``with_list``, and is the best choice for simple loops.
* The ``loop`` keyword will not accept a string as input, see :ref:`query_vs_lookup`.
* Generally speaking, any use of ``with_*`` covered in :ref:`migrating_to_loop` can be updated to use ``loop``.
* Be careful when changing ``with_items`` to ``loop``, as ``with_items`` performed implicit single-level flattening. You may need to use ``flatten(1)`` with ``loop`` to match the exact outcome. For example, to get the same output as:
.. code-block:: yaml
with_items:
- 1
- [2,3]
- 4
you would need
.. code-block:: yaml+jinja
loop: "{{ [1, [2, 3], 4] | flatten(1) }}"
* Any ``with_*`` statement that requires using ``lookup`` within a loop should not be converted to use the ``loop`` keyword. For example, instead of doing:
.. code-block:: yaml+jinja
loop: "{{ lookup('fileglob', '*.txt', wantlist=True) }}"
it's cleaner to keep
.. code-block:: yaml
with_fileglob: '*.txt'
.. _standard_loops:
Standard loops
==============
Iterating over a simple list
----------------------------
Repeated tasks can be written as standard loops over a simple list of strings. You can define the list directly in the task.
.. code-block:: yaml+jinja
- name: Add several users
ansible.builtin.user:
name: "{{ item }}"
state: present
groups: "wheel"
loop:
- testuser1
- testuser2
You can define the list in a variables file, or in the 'vars' section of your play, then refer to the name of the list in the task.
.. code-block:: yaml+jinja
loop: "{{ somelist }}"
Either of these examples would be the equivalent of
.. code-block:: yaml
- name: Add user testuser1
ansible.builtin.user:
name: "testuser1"
state: present
groups: "wheel"
- name: Add user testuser2
ansible.builtin.user:
name: "testuser2"
state: present
groups: "wheel"
You can pass a list directly to a parameter for some plugins. Most of the packaging modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, have this capability. When available, passing the list to a parameter is better than looping over the task. For example
.. code-block:: yaml+jinja
- name: Optimal yum
ansible.builtin.yum:
name: "{{ list_of_packages }}"
state: present
- name: Non-optimal yum, slower and may cause issues with interdependencies
ansible.builtin.yum:
name: "{{ item }}"
state: present
loop: "{{ list_of_packages }}"
Check the :ref:`module documentation <modules_by_category>` to see if you can pass a list to any particular module's parameter(s).
Iterating over a list of hashes
-------------------------------
If you have a list of hashes, you can reference subkeys in a loop. For example:
.. code-block:: yaml+jinja
- name: Add several users
ansible.builtin.user:
name: "{{ item.name }}"
state: present
groups: "{{ item.groups }}"
loop:
- { name: 'testuser1', groups: 'wheel' }
- { name: 'testuser2', groups: 'root' }
When combining :ref:`conditionals <playbooks_conditionals>` with a loop, the ``when:`` statement is processed separately for each item.
See :ref:`the_when_statement` for examples.
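As a brief sketch, the ``when:`` clause below is evaluated once per item, so only the first user is created:
.. code-block:: yaml+jinja
   - name: Add only the users that belong to wheel
     ansible.builtin.user:
       name: "{{ item.name }}"
       state: present
       groups: "{{ item.groups }}"
     loop:
       - { name: 'testuser1', groups: 'wheel' }
       - { name: 'testuser2', groups: 'root' }
     when: item.groups == 'wheel'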
Iterating over a dictionary
---------------------------
To loop over a dict, use the :ref:`dict2items <dict_filter>` filter:
.. code-block:: yaml+jinja
- name: Using dict2items
ansible.builtin.debug:
msg: "{{ item.key }} - {{ item.value }}"
loop: "{{ tag_data | dict2items }}"
vars:
tag_data:
Environment: dev
Application: payment
Here, we are iterating over ``tag_data`` and printing the key and the value from it.
Registering variables with a loop
=================================
You can register the output of a loop as a variable. For example
.. code-block:: yaml+jinja
- name: Register loop output as a variable
ansible.builtin.shell: "echo {{ item }}"
loop:
- "one"
- "two"
register: echo
When you use ``register`` with a loop, the data structure placed in the variable will contain a ``results`` attribute that is a list of all responses from the module. This differs from the data structure returned when using ``register`` without a loop.
.. code-block:: json
{
"changed": true,
"msg": "All items completed",
"results": [
{
"changed": true,
"cmd": "echo \"one\" ",
"delta": "0:00:00.003110",
"end": "2013-12-19 12:00:05.187153",
"invocation": {
"module_args": "echo \"one\"",
"module_name": "shell"
},
"item": "one",
"rc": 0,
"start": "2013-12-19 12:00:05.184043",
"stderr": "",
"stdout": "one"
},
{
"changed": true,
"cmd": "echo \"two\" ",
"delta": "0:00:00.002920",
"end": "2013-12-19 12:00:05.245502",
"invocation": {
"module_args": "echo \"two\"",
"module_name": "shell"
},
"item": "two",
"rc": 0,
"start": "2013-12-19 12:00:05.242582",
"stderr": "",
"stdout": "two"
}
]
}
Subsequent loops over the registered variable to inspect the results may look like
.. code-block:: yaml+jinja
- name: Fail if return code is not 0
ansible.builtin.fail:
msg: "The command ({{ item.cmd }}) did not have a 0 return code"
when: item.rc != 0
loop: "{{ echo.results }}"
During iteration, the result of the current item will be placed in the variable.
.. code-block:: yaml+jinja
- name: Place the result of the current item in the variable
ansible.builtin.shell: echo "{{ item }}"
loop:
- one
- two
register: echo
changed_when: echo.stdout != "one"
.. _complex_loops:
Complex loops
=============
Iterating over nested lists
---------------------------
You can use Jinja2 expressions to iterate over complex lists. For example, a loop can combine nested lists.
.. code-block:: yaml+jinja
- name: Give users access to multiple databases
community.mysql.mysql_user:
name: "{{ item[0] }}"
priv: "{{ item[1] }}.*:ALL"
append_privs: true
password: "foo"
loop: "{{ ['alice', 'bob'] | product(['clientdb', 'employeedb', 'providerdb']) | list }}"
.. _do_until_loops:
Retrying a task until a condition is met
----------------------------------------
.. versionadded:: 1.4
You can use the ``until`` keyword to retry a task until a certain condition is met. Here's an example:
.. code-block:: yaml
- name: Retry a task until a certain condition is met
ansible.builtin.shell: /usr/bin/foo
register: result
until: result.stdout.find("all systems go") != -1
retries: 5
delay: 10
This task runs up to 5 times with a delay of 10 seconds between each attempt. If the result of any attempt has "all systems go" in its stdout, the task succeeds. The default value for ``retries`` is 3 and ``delay`` is 5.
To see the results of individual retries, run the play with ``-vv``.
When you run a task with ``until`` and register the result as a variable, the registered variable will include a key called ``attempts``, which records the number of retries for the task.
.. note:: You must set the ``until`` parameter if you want a task to retry. If ``until`` is not defined, the value for the ``retries`` parameter is forced to 1.
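For example, a follow-up task could report the recorded attempt count, assuming the previous task registered its result as ``result``:
.. code-block:: yaml+jinja
   - name: Report the number of attempts
     ansible.builtin.debug:
       msg: "Task succeeded after {{ result.attempts }} attempt(s)"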
Looping over inventory
----------------------
To loop over your inventory, or just a subset of it, you can use a regular ``loop`` with the ``ansible_play_batch`` or ``groups`` variables.
.. code-block:: yaml+jinja
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ groups['all'] }}"
- name: Show all the hosts in the current play
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ ansible_play_batch }}"
There is also a specific lookup plugin ``inventory_hostnames`` that can be used like this
.. code-block:: yaml+jinja
- name: Show all the hosts in the inventory
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all') }}"
- name: Show all the hosts matching the pattern, that is, all but the group www
ansible.builtin.debug:
msg: "{{ item }}"
loop: "{{ query('inventory_hostnames', 'all:!www') }}"
More information on the patterns can be found in :ref:`intro_patterns`.
.. _query_vs_lookup:
Ensuring list input for ``loop``: using ``query`` rather than ``lookup``
========================================================================
The ``loop`` keyword requires a list as input, but the ``lookup`` keyword returns a string of comma-separated values by default. Ansible 2.5 introduced a new Jinja2 function named :ref:`query <query>` that always returns a list, offering a simpler interface and more predictable output from lookup plugins when using the ``loop`` keyword.
You can force ``lookup`` to return a list to ``loop`` by using ``wantlist=True``, or you can use ``query`` instead.
The following two examples do the same thing.
.. code-block:: yaml+jinja
loop: "{{ query('inventory_hostnames', 'all') }}"
loop: "{{ lookup('inventory_hostnames', 'all', wantlist=True) }}"
.. _loop_control:
Adding controls to loops
========================
.. versionadded:: 2.1
The ``loop_control`` keyword lets you manage your loops in useful ways.
Limiting loop output with ``label``
-----------------------------------
.. versionadded:: 2.2
When looping over complex data structures, the console output of your task can be enormous. To limit the displayed output, use the ``label`` directive with ``loop_control``.
.. code-block:: yaml+jinja
- name: Create servers
digital_ocean:
name: "{{ item.name }}"
state: present
loop:
- name: server1
disks: 3gb
ram: 15Gb
network:
nic01: 100Gb
nic02: 10Gb
...
loop_control:
label: "{{ item.name }}"
The output of this task will display just the ``name`` field for each ``item`` instead of the entire contents of the multi-line ``{{ item }}`` variable.
.. note:: This is for making console output more readable, not protecting sensitive data. If there is sensitive data in ``loop``, set ``no_log: yes`` on the task to prevent disclosure.
Pausing within a loop
---------------------
.. versionadded:: 2.2
To control the time (in seconds) between the execution of each item in a task loop, use the ``pause`` directive with ``loop_control``.
.. code-block:: yaml+jinja
# main.yml
- name: Create servers, pause 3s before creating next
community.digitalocean.digital_ocean:
name: "{{ item }}"
state: present
loop:
- server1
- server2
loop_control:
pause: 3
Tracking progress through a loop with ``index_var``
---------------------------------------------------
.. versionadded:: 2.5
To keep track of where you are in a loop, use the ``index_var`` directive with ``loop_control``. This directive specifies a variable name to contain the current loop index.
.. code-block:: yaml+jinja
- name: Count our fruit
ansible.builtin.debug:
msg: "{{ item }} with index {{ my_idx }}"
loop:
- apple
- banana
- pear
loop_control:
index_var: my_idx
.. note:: ``index_var`` is 0 indexed.
Defining inner and outer variable names with ``loop_var``
---------------------------------------------------------
.. versionadded:: 2.1
You can nest two looping tasks using ``include_tasks``. However, by default Ansible sets the loop variable ``item`` for each loop. This means the inner, nested loop will overwrite the value of ``item`` from the outer loop.
You can specify the name of the variable for each loop using ``loop_var`` with ``loop_control``.
.. code-block:: yaml+jinja
# main.yml
- include_tasks: inner.yml
loop:
- 1
- 2
- 3
loop_control:
loop_var: outer_item
# inner.yml
- name: Print outer and inner items
ansible.builtin.debug:
msg: "outer item={{ outer_item }} inner item={{ item }}"
loop:
- a
- b
- c
.. note:: If Ansible detects that the current loop is using a variable which has already been defined, it will raise an error to fail the task.
Extended loop variables
-----------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get extended loop information using the ``extended`` option to loop control. This option will expose the following information.
========================== ===========
Variable Description
-------------------------- -----------
``ansible_loop.allitems`` The list of all items in the loop
``ansible_loop.index`` The current iteration of the loop. (1 indexed)
``ansible_loop.index0`` The current iteration of the loop. (0 indexed)
``ansible_loop.revindex`` The number of iterations from the end of the loop (1 indexed)
``ansible_loop.revindex0`` The number of iterations from the end of the loop (0 indexed)
``ansible_loop.first`` ``True`` if first iteration
``ansible_loop.last`` ``True`` if last iteration
``ansible_loop.length`` The number of items in the loop
``ansible_loop.previtem`` The item from the previous iteration of the loop. Undefined during the first iteration.
``ansible_loop.nextitem`` The item from the following iteration of the loop. Undefined during the last iteration.
========================== ===========
::
loop_control:
extended: true
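For example, a task can reference these variables directly; this is a minimal sketch combining a few of them:

.. code-block:: yaml+jinja

    - name: Show progress through the loop
      ansible.builtin.debug:
        msg: "Item {{ item }} is {{ ansible_loop.index }} of {{ ansible_loop.length }}"
      loop:
        - a
        - b
        - c
      loop_control:
        extended: true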
.. note:: When using ``loop_control.extended`` more memory will be utilized on the control node. This is a result of ``ansible_loop.allitems`` containing a reference to the full loop data for every loop. When serializing the results for display in callback plugins within the main ansible process, these references may be dereferenced causing memory usage to increase.
.. versionadded:: 2.14
To disable ``ansible_loop.allitems`` and reduce memory consumption, set ``loop_control.extended_allitems: no``.
::
loop_control:
extended: true
extended_allitems: false
Accessing the name of your loop_var
-----------------------------------
.. versionadded:: 2.8
As of Ansible 2.8 you can get the name of the value provided to ``loop_control.loop_var`` with the ``ansible_loop_var`` variable.

For role authors writing roles that allow loops, instead of dictating the required ``loop_var`` value, you can gather the value with the following:
.. code-block:: yaml+jinja
"{{ lookup('vars', ansible_loop_var) }}"
.. _migrating_to_loop:
Migrating from with_X to loop
=============================
.. include:: shared_snippets/with2loop.txt
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`playbooks_reuse_roles`
Playbook organization by roles
:ref:`tips_and_tricks`
Tips and tricks for playbooks
:ref:`playbooks_conditionals`
Conditional statements in playbooks
:ref:`playbooks_variables`
All about variables
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,006 |
Docs: Replace latin terms with english in the playbook_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/playbook_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/playbook_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/playbook_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79006
|
https://github.com/ansible/ansible/pull/79009
|
173ddde126da34f522f80009ceb8bb25b62a5c92
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
| 2022-10-03T20:15:09Z |
python
| 2022-10-04T02:35:15Z |
docs/docsite/rst/playbook_guide/playbooks_python_version.rst
|
.. _pb-py-compat:
********************
Python3 in templates
********************
Ansible uses Jinja2 to take advantage of Python data types and standard functions in templates and variables.
You can use these data types and standard functions to perform a rich set of operations on your data. However,
if you use templates, you must be aware of differences between Python versions.
These topics help you design templates that work on both Python2 and Python3. They might also help if you are upgrading from Python2 to Python3. Upgrading within Python2 or Python3 does not usually introduce changes that affect Jinja2 templates.
.. _pb-py-compat-dict-views:
Dictionary views
================
In Python2, the :meth:`dict.keys`, :meth:`dict.values`, and :meth:`dict.items`
methods return a list. Jinja2 returns that to Ansible through a string representation that Ansible can turn back into a list.
In Python3, those methods return a :ref:`dictionary view <python3:dict-views>` object. The
string representation that Jinja2 returns for dictionary views cannot be parsed back
into a list by Ansible. It is, however, easy to make this portable by
using the :func:`list <jinja2:jinja-filters.list>` filter whenever using :meth:`dict.keys`,
:meth:`dict.values`, or :meth:`dict.items`.
.. code-block:: yaml+jinja
vars:
hosts:
testhost1: 127.0.0.2
testhost2: 127.0.0.3
tasks:
- debug:
msg: '{{ item }}'
# Only works with Python 2
#loop: "{{ hosts.keys() }}"
# Works with both Python 2 and Python 3
loop: "{{ hosts.keys() | list }}"
.. _pb-py-compat-iteritems:
dict.iteritems()
================
Python2 dictionaries have :meth:`~dict.iterkeys`, :meth:`~dict.itervalues`, and :meth:`~dict.iteritems` methods.
Python3 dictionaries do not have these methods. Use :meth:`dict.keys`, :meth:`dict.values`, and :meth:`dict.items` to make your playbooks and templates compatible with both Python2 and Python3.
.. code-block:: yaml+jinja
vars:
hosts:
testhost1: 127.0.0.2
testhost2: 127.0.0.3
tasks:
- debug:
msg: '{{ item }}'
# Only works with Python 2
#loop: "{{ hosts.iteritems() }}"
# Works with both Python 2 and Python 3
loop: "{{ hosts.items() | list }}"
.. seealso::
* The :ref:`pb-py-compat-dict-views` entry for information on
why the :func:`list filter <jinja2:jinja-filters.list>` is necessary
here.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,006 |
Docs: Replace latin terms with english in the playbook_guide directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/playbook_guide/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/playbook_guide/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/playbook_guide/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79006
|
https://github.com/ansible/ansible/pull/79009
|
173ddde126da34f522f80009ceb8bb25b62a5c92
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
| 2022-10-03T20:15:09Z |
python
| 2022-10-04T02:35:15Z |
docs/docsite/rst/playbook_guide/playbooks_reuse.rst
|
.. _playbooks_reuse:
**************************
Re-using Ansible artifacts
**************************
You can write a simple playbook in one very large file, and most users learn the one-file approach first. However, breaking your automation work up into smaller files is an excellent way to organize complex sets of tasks and reuse them. Smaller, more distributed artifacts let you re-use the same variables, tasks, and plays in multiple playbooks to address different use cases. You can use distributed artifacts across multiple parent playbooks or even multiple times within one playbook. For example, you might want to update your customer database as part of several different playbooks. If you put all the tasks related to updating your database in a tasks file or a role, you can re-use them in many playbooks while only maintaining them in one place.
.. contents::
:local:
Creating re-usable files and roles
==================================
Ansible offers four distributed, re-usable artifacts: variables files, task files, playbooks, and roles.
- A variables file contains only variables.
- A task file contains only tasks.
- A playbook contains at least one play, and may contain variables, tasks, and other content. You can re-use tightly focused playbooks, but you can only re-use them statically, not dynamically.
- A role contains a set of related tasks, variables, defaults, handlers, and even modules or other plugins in a defined file-tree. Unlike variables files, task files, or playbooks, roles can be easily uploaded and shared through Ansible Galaxy. See :ref:`playbooks_reuse_roles` for details about creating and using roles.
.. versionadded:: 2.4
Re-using playbooks
==================
You can incorporate multiple playbooks into a main playbook. However, you can only use imports to re-use playbooks. For example:
.. code-block:: yaml
- import_playbook: webservers.yml
- import_playbook: databases.yml
Importing incorporates playbooks in other playbooks statically. Ansible runs the plays and tasks in each imported playbook in the order they are listed, just as if they had been defined directly in the main playbook.
You can select which playbook you want to import at runtime by defining your imported playbook filename with a variable, then passing the variable with either ``--extra-vars`` or the ``vars`` keyword. For example:
.. code-block:: yaml
- import_playbook: "/path/to/{{ import_from_extra_var }}"
- import_playbook: "{{ import_from_vars }}"
vars:
import_from_vars: /path/to/one_playbook.yml
If you run this playbook with ``ansible-playbook my_playbook -e import_from_extra_var=other_playbook.yml``, Ansible imports both one_playbook.yml and other_playbook.yml.
When to turn a playbook into a role
===================================
For some use cases, simple playbooks work well. However, starting at a certain level of complexity, roles work better than playbooks. A role lets you store your defaults, handlers, variables, and tasks in separate directories, instead of in a single long document. Roles are easy to share on Ansible Galaxy. For complex use cases, most users find roles easier to read, understand, and maintain than all-in-one playbooks.
Re-using files and roles
========================
Ansible offers two ways to re-use files and roles in a playbook: dynamic and static.
- For dynamic re-use, add an ``include_*`` task in the tasks section of a play:
- :ref:`include_role <include_role_module>`
- :ref:`include_tasks <include_tasks_module>`
- :ref:`include_vars <include_vars_module>`
- For static re-use, add an ``import_*`` task in the tasks section of a play:
- :ref:`import_role <import_role_module>`
- :ref:`import_tasks <import_tasks_module>`
Task include and import statements can be used at arbitrary depth.
You can still use the bare :ref:`roles <roles_keyword>` keyword at the play level to incorporate a role in a playbook statically. However, the bare :ref:`include <include_module>` keyword, once used for both task files and playbook-level includes, is now deprecated.
Includes: dynamic re-use
------------------------
Including roles, tasks, or variables adds them to a playbook dynamically. Ansible processes included files and roles as they come up in a playbook, so included tasks can be affected by the results of earlier tasks within the top-level playbook. Included roles and tasks are similar to handlers - they may or may not run, depending on the results of other tasks in the top-level playbook.
The primary advantage of using ``include_*`` statements is looping. When a loop is used with an include, the included tasks or role will be executed once for each item in the loop.
The filenames for included roles, tasks, and vars are templated before inclusion.
You can pass variables into includes. See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
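For example, a single included task file can run once per loop item while also receiving passed-in variables; this is a sketch, and the file name and variable are illustrative:

.. code-block:: yaml

    tasks:
      - include_tasks: deploy_app.yml
        loop:
          - app1
          - app2
        vars:
          app_env: staging

Within ``deploy_app.yml``, each iteration can use ``{{ item }}`` and ``{{ app_env }}``.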
Imports: static re-use
----------------------
Importing roles, tasks, or playbooks adds them to a playbook statically. Ansible pre-processes imported files and roles before it runs any tasks in a playbook, so imported content is never affected by other tasks within the top-level playbook.
The filenames for imported roles and tasks support templating, but the variables must be available when Ansible is pre-processing the imports. This can be done with the ``vars`` keyword or by using ``--extra-vars``.
You can pass variables to imports. You must pass variables if you want to run an imported file more than once in a playbook. For example:
.. code-block:: yaml
tasks:
- import_tasks: wordpress.yml
vars:
wp_user: timmy
- import_tasks: wordpress.yml
vars:
wp_user: alice
- import_tasks: wordpress.yml
vars:
wp_user: bob
See :ref:`ansible_variable_precedence` for more details on variable inheritance and precedence.
.. _dynamic_vs_static:
Comparing includes and imports: dynamic and static re-use
------------------------------------------------------------
Each approach to re-using distributed Ansible artifacts has advantages and limitations. You may choose dynamic re-use for some playbooks and static re-use for others. Although you can use both dynamic and static re-use in a single playbook, it is best to select one approach per playbook. Mixing static and dynamic re-use can introduce difficult-to-diagnose bugs into your playbooks. This table summarizes the main differences so you can choose the best approach for each playbook you create.
.. table::
:class: documentation-table
========================= ======================================== ========================================
.. Include_* Import_*
========================= ======================================== ========================================
Type of re-use Dynamic Static
When processed At runtime, when encountered Pre-processed during playbook parsing
Task or play All includes are tasks ``import_playbook`` cannot be a task
Task options Apply only to include task itself Apply to all child tasks in import
Calling from loops Executed once for each loop item Cannot be used in a loop
Using ``--list-tags`` Tags within includes not listed All tags appear with ``--list-tags``
Using ``--list-tasks`` Tasks within includes not listed All tasks appear with ``--list-tasks``
Notifying handlers Cannot trigger handlers within includes Can trigger individual imported handlers
Using ``--start-at-task`` Cannot start at tasks within includes Can start at imported tasks
Using inventory variables Can ``include_*: {{ inventory_var }}`` Cannot ``import_*: {{ inventory_var }}``
With playbooks No ``include_playbook`` Can import full playbooks
With variables files Can include variables files Use ``vars_files:`` to import variables
========================= ======================================== ========================================
.. note::

   There are also big differences in resource consumption and performance: imports are quite lean and fast, while includes require a lot of management and accounting.
Re-using tasks as handlers
==========================
You can also use includes and imports in the :ref:`handlers` section of a playbook. For instance, if you want to define how to restart Apache, you only have to do that once for all of your playbooks. You might make a ``restarts.yml`` file that looks like:
.. code-block:: yaml
# restarts.yml
- name: Restart apache
ansible.builtin.service:
name: apache
state: restarted
- name: Restart mysql
ansible.builtin.service:
name: mysql
state: restarted
You can trigger handlers from either an import or an include, but the procedure is different for each method of re-use. If you include the file, you must notify the include itself, which triggers all the tasks in ``restarts.yml``. If you import the file, you must notify the individual task(s) within ``restarts.yml``. You can mix direct tasks and handlers with included or imported tasks and handlers.
Triggering included (dynamic) handlers
--------------------------------------
Includes are executed at run-time, so the name of the include exists during play execution, but the included tasks do not exist until the include itself is triggered. To use the ``Restart apache`` task with dynamic re-use, refer to the name of the include itself. This approach triggers all tasks in the included file as handlers. For example, with the task file shown above:
.. code-block:: yaml
- name: Trigger an included (dynamic) handler
hosts: localhost
handlers:
- name: Restart services
include_tasks: restarts.yml
tasks:
- command: "true"
notify: Restart services
Triggering imported (static) handlers
-------------------------------------
Imports are processed before the play begins, so the name of the import no longer exists during play execution, but the names of the individual imported tasks do exist. To use the ``Restart apache`` task with static re-use, refer to the name of each task or tasks within the imported file. For example, with the task file shown above:
.. code-block:: yaml
- name: Trigger an imported (static) handler
hosts: localhost
handlers:
- name: Restart services
import_tasks: restarts.yml
tasks:
- command: "true"
notify: Restart apache
- command: "true"
notify: Restart mysql
.. seealso::
:ref:`utilities_modules`
Documentation of the ``include*`` and ``import*`` modules discussed here.
:ref:`working_with_playbooks`
Review the basic Playbook language features
:ref:`playbooks_variables`
All about variables in playbooks
:ref:`playbooks_conditionals`
Conditionals in playbooks
:ref:`playbooks_loops`
Loops in playbooks
:ref:`tips_and_tricks`
Tips and tricks for playbooks
:ref:`ansible_galaxy`
How to share roles on galaxy, role management
`GitHub Ansible examples <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the GitHub project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,999 |
Docs: replace Latin terms in network (not platform) files
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/network/ directory that are not `platform*` files.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus --exclude=platform* ` in the docs/docsite/rst/network/ directory to find these.
List of all effected files are in a follow-on comment. NOTE: these are NOT the platform_* files i to limit the scope of the PR that fixes these.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/network/user_guide/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78999
|
https://github.com/ansible/ansible/pull/79013
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
|
8d665a1a8ef513913fe4d9cf5a2cd107991780a4
| 2022-10-03T19:49:08Z |
python
| 2022-10-04T08:47:24Z |
docs/docsite/rst/network/getting_started/first_inventory.rst
|
***********************************************
Build Your Inventory
***********************************************
Running a playbook without an inventory requires several command-line flags. Also, running a playbook against a single device is not a huge efficiency gain over making the same change manually. The next step to harnessing the full power of Ansible is to use an inventory file to organize your managed nodes into groups with information like the ``ansible_network_os`` and the SSH user. A fully-featured inventory file can serve as the source of truth for your network. Using an inventory file, a single playbook can maintain hundreds of network devices with a single command. This page shows you how to build an inventory file, step by step.
.. contents::
:local:
Basic inventory
==================================================
First, group your inventory logically. Best practice is to group servers and network devices by their What (application, stack or microservice), Where (datacenter or region), and When (development stage):
- **What**: db, web, leaf, spine
- **Where**: east, west, floor_19, building_A
- **When**: dev, test, staging, prod
Avoid spaces, hyphens, and preceding numbers (use ``floor_19``, not ``19th_floor``) in your group names. Group names are case sensitive.
This tiny example data center illustrates a basic group structure. You can group groups using the syntax ``[metagroupname:children]`` and listing groups as members of the metagroup. Here, the group ``network`` includes all leafs and all spines; the group ``datacenter`` includes all network devices plus all webservers.
.. code-block:: yaml
---
leafs:
hosts:
leaf01:
ansible_host: 10.16.10.11
leaf02:
ansible_host: 10.16.10.12
spines:
hosts:
spine01:
ansible_host: 10.16.10.13
spine02:
ansible_host: 10.16.10.14
network:
children:
leafs:
spines:
webservers:
hosts:
webserver01:
ansible_host: 10.16.10.15
webserver02:
ansible_host: 10.16.10.16
datacenter:
children:
network:
webservers:
You can also create this same inventory in INI format.
.. code-block:: ini
[leafs]
leaf01
leaf02
[spines]
spine01
spine02
[network:children]
leafs
spines
[webservers]
webserver01
webserver02
[datacenter:children]
network
webservers
Add variables to the inventory
================================================================================
Next, you can set values for many of the variables you needed in your first Ansible command in the inventory, so you can skip them in the ``ansible-playbook`` command. In this example, the inventory includes each network device's IP, OS, and SSH user. If your network devices are only accessible by IP, you must add the IP to the inventory file. If you access your network devices using hostnames, the IP is not necessary.
.. code-block:: yaml
---
leafs:
hosts:
leaf01:
ansible_host: 10.16.10.11
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
leaf02:
ansible_host: 10.16.10.12
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
spines:
hosts:
spine01:
ansible_host: 10.16.10.13
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
spine02:
ansible_host: 10.16.10.14
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
network:
children:
leafs:
spines:
webservers:
hosts:
webserver01:
ansible_host: 10.16.10.15
ansible_user: my_server_user
webserver02:
ansible_host: 10.16.10.16
ansible_user: my_server_user
datacenter:
children:
network:
webservers:
Group variables within inventory
================================================================================
When devices in a group share the same variable values, such as OS or SSH user, you can reduce duplication and simplify maintenance by consolidating these into group variables:
.. code-block:: yaml
---
leafs:
hosts:
leaf01:
ansible_host: 10.16.10.11
leaf02:
ansible_host: 10.16.10.12
vars:
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
spines:
hosts:
spine01:
ansible_host: 10.16.10.13
spine02:
ansible_host: 10.16.10.14
vars:
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
network:
children:
leafs:
spines:
webservers:
hosts:
webserver01:
ansible_host: 10.16.10.15
webserver02:
ansible_host: 10.16.10.16
vars:
ansible_user: my_server_user
datacenter:
children:
network:
webservers:
Variable syntax
================================================================================
The syntax for variable values is different in inventory, in playbooks, and in the ``group_vars`` files, which are covered below. Even though playbook and ``group_vars`` files are both written in YAML, you use variables differently in each.
- In an ini-style inventory file you **must** use the syntax ``key=value`` for variable values: ``ansible_network_os=vyos.vyos.vyos``.
- In any file with the ``.yml`` or ``.yaml`` extension, including playbooks and ``group_vars`` files, you **must** use YAML syntax: ``key: value``.
- In ``group_vars`` files, use the full ``key`` name: ``ansible_network_os: vyos.vyos.vyos``.
- In playbooks, use the short-form ``key`` name, which drops the ``ansible`` prefix: ``network_os: vyos.vyos.vyos``.
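For example, following the YAML rules above, a ``group_vars/network.yml`` file (the file name is illustrative) uses full key names:

.. code-block:: yaml

    # group_vars/network.yml
    ansible_connection: ansible.netcommon.network_cli
    ansible_network_os: vyos.vyos.vyos
    ansible_user: my_vyos_user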
Group inventory by platform
================================================================================
As your inventory grows, you may want to group devices by platform. This allows you to specify platform-specific variables easily for all devices on that platform:
.. code-block:: yaml
---
leafs:
hosts:
leaf01:
ansible_host: 10.16.10.11
leaf02:
ansible_host: 10.16.10.12
spines:
hosts:
spine01:
ansible_host: 10.16.10.13
spine02:
ansible_host: 10.16.10.14
network:
children:
leafs:
spines:
vars:
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
webservers:
hosts:
webserver01:
ansible_host: 10.16.10.15
webserver02:
ansible_host: 10.16.10.16
vars:
ansible_user: my_server_user
datacenter:
children:
network:
webservers:
With this setup, you can run ``first_playbook.yml`` with only two flags:
.. code-block:: console
ansible-playbook -i inventory.yml -k first_playbook.yml
With the ``-k`` flag, you provide the SSH password(s) at the prompt. Alternatively, you can store SSH and other secrets and passwords securely in your group_vars files with ``ansible-vault``. See :ref:`network_vault` for details.
Verifying the inventory
=========================
You can use the :ref:`ansible-inventory` CLI command to display the inventory as Ansible sees it.
.. code-block:: console
$ ansible-inventory -i test.yml --list
{
"_meta": {
"hostvars": {
"leaf01": {
"ansible_connection": "ansible.netcommon.network_cli",
"ansible_host": "10.16.10.11",
"ansible_network_os": "vyos.vyos.vyos",
"ansible_user": "my_vyos_user"
},
"leaf02": {
"ansible_connection": "ansible.netcommon.network_cli",
"ansible_host": "10.16.10.12",
"ansible_network_os": "vyos.vyos.vyos",
"ansible_user": "my_vyos_user"
},
"spine01": {
"ansible_connection": "ansible.netcommon.network_cli",
"ansible_host": "10.16.10.13",
"ansible_network_os": "vyos.vyos.vyos",
"ansible_user": "my_vyos_user"
},
"spine02": {
"ansible_connection": "ansible.netcommon.network_cli",
"ansible_host": "10.16.10.14",
"ansible_network_os": "vyos.vyos.vyos",
"ansible_user": "my_vyos_user"
},
"webserver01": {
"ansible_host": "10.16.10.15",
"ansible_user": "my_server_user"
},
"webserver02": {
"ansible_host": "10.16.10.16",
"ansible_user": "my_server_user"
}
}
},
"all": {
"children": [
"datacenter",
"ungrouped"
]
},
"datacenter": {
"children": [
"network",
"webservers"
]
},
"leafs": {
"hosts": [
"leaf01",
"leaf02"
]
},
"network": {
"children": [
"leafs",
"spines"
]
},
"spines": {
"hosts": [
"spine01",
"spine02"
]
},
"webservers": {
"hosts": [
"webserver01",
"webserver02"
]
}
}
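You can also display a condensed tree of groups and hosts with the ``--graph`` option (output abbreviated here; exact formatting can vary by version):

.. code-block:: console

    $ ansible-inventory -i test.yml --graph
    @all:
      |--@datacenter:
      |  |--@network:
      |  |  |--@leafs:
      |  |  |--@spines:
      |  |--@webservers:
      ...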
.. _network_vault:
Protecting sensitive variables with ``ansible-vault``
================================================================================
The ``ansible-vault`` command provides encryption for files and/or individual variables like passwords. This tutorial will show you how to encrypt a single SSH password. You can use the commands below to encrypt other sensitive information, such as database passwords, privilege-escalation passwords and more.
First you must create a password for ansible-vault itself. It is used as the encryption key, and with this you can encrypt dozens of different passwords across your Ansible project. You can access all those secrets (encrypted values) with a single password (the ansible-vault password) when you run your playbooks. Here's a simple example.
1. Create a file and write your password for ansible-vault to it:
.. code-block:: console
echo "my-ansible-vault-pw" > ~/my-ansible-vault-pw-file
2. Create the encrypted ssh password for your VyOS network devices, pulling your ansible-vault password from the file you just created:
.. code-block:: console
ansible-vault encrypt_string --vault-id my_user@~/my-ansible-vault-pw-file 'VyOS_SSH_password' --name 'ansible_password'
If you prefer to type your ansible-vault password rather than store it in a file, you can request a prompt:
.. code-block:: console
ansible-vault encrypt_string --vault-id my_user@prompt 'VyOS_SSH_password' --name 'ansible_password'
and type in the vault password for ``my_user``.
The :option:`--vault-id <ansible-playbook --vault-id>` flag allows different vault passwords for different users or different levels of access. The output includes the user name ``my_user`` from your ``ansible-vault`` command and uses the YAML syntax ``key: value``:
.. code-block:: yaml
ansible_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;my_user
66386134653765386232383236303063623663343437643766386435663632343266393064373933
3661666132363339303639353538316662616638356631650a316338316663666439383138353032
63393934343937373637306162366265383461316334383132626462656463363630613832313562
3837646266663835640a313164343535316666653031353763613037656362613535633538386539
65656439626166666363323435613131643066353762333232326232323565376635
Encryption successful
This is an example using an extract from a YAML inventory, as the INI format does not support inline vaults:
.. code-block:: yaml
...
vyos: # this is a group in yaml inventory, but you can also do under a host
vars:
ansible_connection: ansible.netcommon.network_cli
ansible_network_os: vyos.vyos.vyos
ansible_user: my_vyos_user
ansible_password: !vault |
$ANSIBLE_VAULT;1.2;AES256;my_user
66386134653765386232383236303063623663343437643766386435663632343266393064373933
3661666132363339303639353538316662616638356631650a316338316663666439383138353032
63393934343937373637306162366265383461316334383132626462656463363630613832313562
3837646266663835640a313164343535316666653031353763613037656362613535633538386539
65656439626166666363323435613131643066353762333232326232323565376635
...
To use inline vaulted variables with an INI inventory, you need to store them in a 'vars' file in YAML format. It can reside in host_vars/ or group_vars/ to be picked up automatically, or be referenced from a play through ``vars_files`` or ``include_vars``.
To run a playbook with this setup, drop the ``-k`` flag and add a flag for your ``vault-id``:
.. code-block:: console
ansible-playbook -i inventory --vault-id my_user@~/my-ansible-vault-pw-file first_playbook.yml
Or with a prompt instead of the vault password file:
.. code-block:: console
ansible-playbook -i inventory --vault-id my_user@prompt first_playbook.yml
To see the original value, you can use the debug module. Please note that if your YAML file defines the ``ansible_connection`` variable (as we used in our example), it will take effect when you execute the command below. To prevent this, make a copy of the file without the ``ansible_connection`` variable.
.. code-block:: console
cat vyos.yml | grep -v ansible_connection >> vyos_no_connection.yml
ansible localhost -m debug -a var="ansible_password" -e "@vyos_no_connection.yml" --ask-vault-pass
Vault password:
localhost | SUCCESS => {
"ansible_password": "VyOS_SSH_password"
}
.. warning::
Vault content can only be decrypted with the password that was used to encrypt it. If you want to stop using one password and move to a new one, you can update and re-encrypt existing vault content with ``ansible-vault rekey myfile``, then provide the old password and the new password. Copies of vault content that remain encrypted with the old password can still be decrypted with that old password.
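For example, rekeying a fully vault-encrypted file looks like this (a sketch; the file name and exact prompt text are illustrative):

.. code-block:: console

    $ ansible-vault rekey group_vars/secrets.yml
    Vault password:
    New Vault password:
    Confirm New Vault password:
    Rekey successful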
For more details on building inventory files, see :ref:`the introduction to inventory<intro_inventory>`; for more details on ansible-vault, see :ref:`the full Ansible Vault documentation<vault>`.
Now that you understand the basics of commands, playbooks, and inventory, it's time to explore some more complex Ansible Network examples.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,999 |
Docs: replace Latin terms in network (not platform) files
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/network/ directory that are not `platform*` files.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus --exclude=platform* ` in the docs/docsite/rst/network/ directory to find these.
List of all effected files are in a follow-on comment. NOTE: these are NOT the platform_* files i to limit the scope of the PR that fixes these.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/network/user_guide/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78999
|
https://github.com/ansible/ansible/pull/79013
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
|
8d665a1a8ef513913fe4d9cf5a2cd107991780a4
| 2022-10-03T19:49:08Z |
python
| 2022-10-04T08:47:24Z |
docs/docsite/rst/network/getting_started/index.rst
|
.. _network_getting_started:
**********************************
Network Getting Started
**********************************
Ansible collections support a wide range of vendors, device types, and actions, so you can manage your entire network with a single automation tool. With Ansible, you can:
- Automate repetitive tasks to speed routine network changes and free up your time for more strategic work
- Leverage the same simple, powerful, and agentless automation tool for network tasks that operations and development use
- Separate the data model (in a playbook or role) from the execution layer (through Ansible modules) to manage heterogeneous network devices
- Benefit from community and vendor-generated sample playbooks and roles to help accelerate network automation projects
- Communicate securely with network hardware over SSH or HTTPS
**Who should use this guide?**
This guide is intended for network engineers using Ansible for the first time. If you understand networks but have never used Ansible, work through the guide from start to finish.
This guide is also useful for experienced Ansible users automating network tasks for the first time. You can use Ansible commands, playbooks and modules to configure hubs, switches, routers, bridges and other network devices. But network modules are different from Linux/Unix and Windows modules, and you must understand some network-specific concepts to succeed. If you understand Ansible but have never automated a network task, start with the second section.
This guide introduces basic Ansible concepts and guides you through your first Ansible commands, playbooks and inventory entries.
.. toctree::
:maxdepth: 2
:caption: Getting Started Guide
basic_concepts
network_differences
first_playbook
first_inventory
network_roles
intermediate_concepts
network_connection_options
network_resources
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,999 |
Docs: replace Latin terms in network (not platform) files
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/network/ directory that are not `platform*` files.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus --exclude=platform* ` in the docs/docsite/rst/network/ directory to find these.
List of all effected files are in a follow-on comment. NOTE: these are NOT the platform_* files i to limit the scope of the PR that fixes these.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/network/user_guide/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78999
|
https://github.com/ansible/ansible/pull/79013
|
f0cc70f9e1d4991957f3a55eb9ef1c5617e4cd2b
|
8d665a1a8ef513913fe4d9cf5a2cd107991780a4
| 2022-10-03T19:49:08Z |
python
| 2022-10-04T08:47:24Z |
docs/docsite/rst/network/user_guide/network_debug_troubleshooting.rst
|
.. _network_debug_troubleshooting:
***************************************
Network Debug and Troubleshooting Guide
***************************************
This section discusses how to debug and troubleshoot network modules in Ansible.
.. contents::
:local:
How to troubleshoot
===================
Ansible network automation errors generally fall into one of the following categories:
:Authentication issues:
* Not correctly specifying credentials
* Remote device (network switch/router) not falling back to other authentication methods
* SSH key issues
:Timeout issues:
* Can occur when trying to pull a large amount of data
* May actually be masking an authentication issue
:Playbook issues:
* Use of ``delegate_to``, instead of ``ProxyCommand``. See :ref:`network proxy guide <network_delegate_to_vs_ProxyCommand>` for more information.
.. warning:: ``unable to open shell``
The ``unable to open shell`` message means that the ``ansible-connection`` daemon has not been able to successfully
talk to the remote network device. This generally means that there is an authentication issue. See the "Authentication and connection issues" section
in this document for more information.
.. _enable_network_logging:
Enabling Networking logging and how to read the logfile
-------------------------------------------------------
**Platforms:** Any
Ansible includes logging to help diagnose and troubleshoot issues regarding Ansible Networking modules.
Because logging is very verbose, it is disabled by default. It can be enabled with the :envvar:`ANSIBLE_LOG_PATH` and :envvar:`ANSIBLE_DEBUG` options on the ansible-controller, that is, the machine running ``ansible-playbook``.
Before running ``ansible-playbook``, run the following commands to enable logging:
.. code:: shell
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with 4*v for connection level verbosity
ansible-playbook -vvvv ...
After Ansible has finished running you can inspect the log file which has been created on the ansible-controller:
.. code::
less $ANSIBLE_LOG_PATH
2017-03-30 13:19:52,740 p=28990 u=fred | creating new control socket for host veos01:22 as user admin
2017-03-30 13:19:52,741 p=28990 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-03-30 13:19:52,741 p=28990 u=fred | current working directory is /home/fred/ansible/test/integration
2017-03-30 13:19:52,741 p=28990 u=fred | using connection plugin network_cli
...
2017-03-30 13:20:14,771 paramiko.transport userauth is OK
2017-03-30 13:20:15,283 paramiko.transport Authentication (keyboard-interactive) successful!
2017-03-30 13:20:15,302 p=28990 u=fred | ssh connection done, setting terminal
2017-03-30 13:20:15,321 p=28990 u=fred | ssh connection has completed successfully
2017-03-30 13:20:15,322 p=28990 u=fred | connection established to veos01 in 0:00:22.580626
From the log notice:
* ``p=28990`` Is the PID (Process ID) of the ``ansible-connection`` process
* ``u=fred`` Is the user `running` ansible, not the remote-user you are attempting to connect as
* ``creating new control socket for host veos01:22 as user admin`` host:port as user
* ``control socket path is`` location on disk where the persistent connection socket is created
* ``using connection plugin network_cli`` Informs you that persistent connection is being used
* ``connection established to veos01 in 0:00:22.580626`` Time taken to obtain a shell on the remote device
.. note:: Port None ``creating new control socket for host veos01:None``
If the log reports the port as ``None`` this means that the default port is being used.
A future Ansible release will improve this message so that the port is always logged.
Because the log files are verbose, you can use grep to look for specific information. For example, once you have identified the ``pid`` from the ``creating new control socket for host`` line you can search for other connection log entries::
grep "p=28990" $ANSIBLE_LOG_PATH
Enabling Networking device interaction logging
----------------------------------------------
**Platforms:** Any
Ansible includes logging of device interaction in the log file to help diagnose and troubleshoot
issues regarding Ansible Networking modules. The messages are logged in the file pointed to by the ``log_path`` configuration
option in the Ansible configuration file or by setting the :envvar:`ANSIBLE_LOG_PATH`.
.. warning::
The device interaction messages consist of the commands executed on the target device and the returned responses. Since this log data can contain sensitive information, including passwords in plain text, it is disabled by default.
Additionally, in order to prevent accidental leakage of data, a warning will be shown on every task with this
setting enabled, specifying which host has it enabled and where the data is being logged.
Be sure to fully understand the security implications of enabling this option. Device interaction logging can be enabled globally in the configuration file or with an environment variable, or on a per-task basis by passing a special variable to the task.
Before running ``ansible-playbook`` run the following commands to enable logging:
.. code-block:: text
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
Enable device interaction logging for a given task
.. code-block:: yaml
- name: get version information
cisco.ios.ios_command:
commands:
- show version
vars:
ansible_persistent_log_messages: True
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
log_messages = True
or enable the environment variable `ANSIBLE_PERSISTENT_LOG_MESSAGES`:
.. code-block:: text
# Enable device interaction logging
export ANSIBLE_PERSISTENT_LOG_MESSAGES=True
If the task is failing on connection initialization itself, you should enable this option globally. If an individual task is failing intermittently, this option can be enabled for that task itself to find the root cause.
After Ansible has finished running you can inspect the log file which has been created on the ansible-controller.
.. note:: Be sure to fully understand the security implications of enabling this option, as it can log sensitive information in the log file, creating a security vulnerability.
Isolating an error
------------------
**Platforms:** Any
As with any troubleshooting effort, it's important to simplify the test case as much as possible.

For Ansible, this can be done by ensuring you are only running against one remote device:
* Using ``ansible-playbook --limit switch1.example.net...``
* Using an ad hoc ``ansible`` command
`ad hoc` refers to running Ansible to perform some quick command using ``/usr/bin/ansible``, rather than the orchestration language, which is ``/usr/bin/ansible-playbook``. In this case we can ensure connectivity by attempting to execute a single command on the remote device::
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
In the above example, we:
* connect to ``switch1.example.net`` specified in the inventory file ``inventory``
* use the module ``arista.eos.eos_command``
* run the command ``?``
* connect using the username ``admin``
* inform the ``ansible`` command to prompt for the SSH password by specifying ``-k``
If you have SSH keys configured correctly, you don't need to specify the ``-k`` parameter.
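For example, with working SSH keys the same ad hoc check simply drops the ``-k`` flag:

.. code-block:: shell

    ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin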
If the connection still fails you can combine it with the enable_network_logging parameter. For example:
.. code-block:: text
# Specify the location for the log file
export ANSIBLE_LOG_PATH=~/ansible.log
# Enable Debug
export ANSIBLE_DEBUG=True
# Run with ``-vvvv`` for connection level verbosity
ansible -m arista.eos.eos_command -a 'commands=?' -i inventory switch1.example.net -e 'ansible_connection=ansible.netcommon.network_cli' -u admin -k
Then review the log file and find the relevant error message in the rest of this document.
.. For details on other ways to authenticate, see LINKTOAUTHHOWTODOCS.
.. _socket_path_issue:
Troubleshooting socket path issues
==================================
**Platforms:** Any
The ``Socket path does not exist or cannot be found`` and ``Unable to connect to socket`` messages indicate that the socket used to communicate with the remote network device is unavailable or does not exist.
For example:
.. code-block:: none
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 115, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Socket path XX does not exist or cannot be found. See Troubleshooting socket path issues in the Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
or
.. code-block:: none
fatal: [spine02]: FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Traceback (most recent call last):\n File \"/tmp/ansible_TSqk5J/ansible_modlib.zip/ansible/module_utils/connection.py\", line 123, in _exec_jsonrpc\nansible.module_utils.connection.ConnectionError: Unable to connect to socket XX. See Troubleshooting socket path issues in Network Debug and Troubleshooting Guide\n",
"module_stdout": "",
"msg": "MODULE FAILURE",
"rc": 1
}
Suggestions to resolve:
#. Verify that you have write access to the socket path described in the error message.
#. Follow the steps detailed in :ref:`enable network logging <enable_network_logging>`.
If the identified error message from the log file is:
.. code-block:: none
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
or
.. code-block:: none
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
Follow the steps detailed in :ref:`timeout issues <timeout_issues>`
.. _unable_to_open_shell:
Category "Unable to open shell"
===============================
**Platforms:** Any
The ``unable to open shell`` message means that the ``ansible-connection`` daemon has not been able to successfully talk to the remote network device. This generally means that there is an authentication issue. It is a "catch all" message, meaning you need to enable :ref:`logging <a_note_about_logging>` to find the underlying issues.
For example:
.. code-block:: none
TASK [prepare_eos_tests : enable cli on remote device] **************************************************
fatal: [veos01]: FAILED! => {"changed": false, "failed": true, "msg": "unable to open shell"}
or:
.. code-block:: none
TASK [ios_system : configure name_servers] *************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to open shell",
}
Suggestions to resolve:
Follow the steps detailed in enable_network_logging_.
Once you've identified the error message from the log file, the specific solution can be found in the rest of this document.
Error: "[Errno -2] Name or service not known"
---------------------------------------------
**Platforms:** Any
Indicates that the remote host you are trying to connect to cannot be reached.
For example:
.. code-block:: none
2017-04-04 11:39:48,147 p=15299 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 11:39:48,147 p=15299 u=fred | current working directory is /home/fred/git/ansible-inc/stable-2.3/test/integration
2017-04-04 11:39:48,147 p=15299 u=fred | using connection plugin network_cli
2017-04-04 11:39:48,340 p=15299 u=fred | connecting to host veos01 returned an error
2017-04-04 11:39:48,340 p=15299 u=fred | [Errno -2] Name or service not known
Suggestions to resolve:
* If you are using the ``provider:`` options ensure that its suboption ``host:`` is set correctly.
* If you are not using ``provider:`` nor top-level arguments ensure your inventory file is correct.
Error: "Authentication failed"
------------------------------
**Platforms:** Any
Occurs if the credentials (username, passwords, or ssh keys) passed to ``ansible-connection`` (through ``ansible`` or ``ansible-playbook``) cannot be used to connect to the remote device.
For example:
.. code-block:: none
<ios01> ESTABLISH CONNECTION FOR USER: cisco on PORT 22 TO ios01
<ios01> Authentication failed.
Suggestions to resolve:
If you are specifying credentials with ``password:`` (either directly or through ``provider:``) or the environment variable ``ANSIBLE_NET_PASSWORD``, it is possible that ``paramiko`` (the Python SSH library that Ansible uses) is using ssh keys, and therefore the credentials you are specifying are being ignored. To find out if this is the case, disable "look for keys". This can be done like this:
.. code-block:: shell
export ANSIBLE_PARAMIKO_LOOK_FOR_KEYS=False
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[paramiko_connection]
look_for_keys = False
Error: "connecting to host <hostname> returned an error" or "Bad address"
-------------------------------------------------------------------------
This may occur if the SSH fingerprint hasn't been added to Paramiko's (the Python SSH library) known hosts file.
When using persistent connections with Paramiko, the connection runs in a background process. If the host doesn't already have a valid SSH key, by default Ansible will prompt to add the host key. This will cause connections running in background processes to fail.
For example:
.. code-block:: none
2017-04-04 12:06:03,486 p=17981 u=fred | using connection plugin network_cli
2017-04-04 12:06:04,680 p=17981 u=fred | connecting to host veos01 returned an error
2017-04-04 12:06:04,682 p=17981 u=fred | (14, 'Bad address')
2017-04-04 12:06:33,519 p=17981 u=fred | number of connection attempts exceeded, unable to connect to control socket
2017-04-04 12:06:33,520 p=17981 u=fred | persistent_connect_interval=1, persistent_connect_retries=30
Suggestions to resolve:
Use ``ssh-keyscan`` to pre-populate the known_hosts file. You need to ensure the keys are correct.
.. code-block:: shell
ssh-keyscan veos01
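To record the scanned key, you can append it to your known_hosts file (a sketch assuming the default location):

.. code-block:: shell

    ssh-keyscan veos01 >> ~/.ssh/known_hosts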
or
You can tell Ansible to automatically accept the keys.
Environment variable method:
.. code-block:: shell
export ANSIBLE_PARAMIKO_HOST_KEY_AUTO_ADD=True
ansible-playbook ...
``ansible.cfg`` method:
ansible.cfg
.. code-block:: ini
[paramiko_connection]
host_key_auto_add = True
.. warning::

   Care should be taken before accepting keys.
Error: "No authentication methods available"
--------------------------------------------
For example:
.. code-block:: none
2017-04-04 12:19:05,670 p=18591 u=fred | creating new control socket for host veos01:None as user admin
2017-04-04 12:19:05,670 p=18591 u=fred | control socket path is /home/fred/.ansible/pc/ca5960d27a
2017-04-04 12:19:05,670 p=18591 u=fred | current working directory is /home/fred/git/ansible-inc/ansible-workspace-2/test/integration
2017-04-04 12:19:05,670 p=18591 u=fred | using connection plugin network_cli
2017-04-04 12:19:06,606 p=18591 u=fred | connecting to host veos01 returned an error
2017-04-04 12:19:06,606 p=18591 u=fred | No authentication methods available
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
Suggestions to resolve:
No password or SSH key supplied
Clearing Out Persistent Connections
-----------------------------------
**Platforms:** Any
In Ansible 2.3, persistent connection sockets are stored in ``~/.ansible/pc`` for all network devices. When an Ansible playbook runs, the persistent socket connection is displayed when verbose output is specified.
``<switch> socket_path: /home/fred/.ansible/pc/f64ddfa760``
To clear out a persistent connection before it times out (the default timeout is 30 seconds of inactivity), simply delete the socket file.
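For example, using the socket path shown above:

.. code-block:: shell

    rm /home/fred/.ansible/pc/f64ddfa760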
.. _timeout_issues:
Timeout issues
==============
Persistent connection idle timeout
----------------------------------
By default, ``ANSIBLE_PERSISTENT_CONNECT_TIMEOUT`` is set to 30 (seconds). You may see the following error if this value is too low:
.. code-block:: none
2017-04-04 12:19:05,670 p=18591 u=fred | persistent connection idle timeout triggered, timeout value is 30 secs
Suggestions to resolve:
Increase the value of the persistent connection idle timeout:
.. code-block:: sh
export ANSIBLE_PERSISTENT_CONNECT_TIMEOUT=60
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
connect_timeout = 60
Command timeout
---------------
By default, ``ANSIBLE_PERSISTENT_COMMAND_TIMEOUT`` is set to 30 (seconds). Prior versions of Ansible had this value set to 10 seconds by default.
You may see the following error if this value is too low:
.. code-block:: none
2017-04-04 12:19:05,670 p=18591 u=fred | command timeout triggered, timeout value is 30 secs
Suggestions to resolve:
* Option 1 (Global command timeout setting):
Increase the value of the command timeout in the configuration file or by setting an environment variable.
.. code-block:: sh
export ANSIBLE_PERSISTENT_COMMAND_TIMEOUT=60
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
command_timeout = 60
* Option 2 (Per task command timeout setting):
Increase the command timeout on a per-task basis. All network modules support a
timeout value that can be set per task.
The timeout value controls the amount of time, in seconds, before the
task fails if the command has not returned.
For local connection type:
.. FIXME: Detail error here
Suggestions to resolve:
.. code-block:: yaml
- name: save running-config
cisco.ios.ios_command:
commands: copy running-config startup-config
provider: "{{ cli }}"
timeout: 30
Suggestions to resolve:
.. code-block:: yaml
- name: save running-config
cisco.ios.ios_command:
commands: copy running-config startup-config
vars:
ansible_command_timeout: 60
Some operations take longer than the default 30 seconds to complete. One good
example is saving the current running config on IOS devices to startup config.
In this case, changing the timeout value from the default 30 seconds to 60
seconds will prevent the task from failing before the command completes
successfully.
Persistent connection retry timeout
-----------------------------------
By default, ``ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT`` is set to 15 (seconds). You may see the following error if this value is too low:
.. code-block:: yaml
2017-04-04 12:19:35,708 p=18591 u=fred | connect retry timeout expired, unable to connect to control socket
2017-04-04 12:19:35,709 p=18591 u=fred | persistent_connect_retry_timeout is 15 secs
Suggestions to resolve:
Increase the value of the persistent connection retry timeout.
Note: This value should be greater than the SSH timeout value (the timeout value under the defaults
section in the configuration file) and less than the value of the persistent
connection idle timeout (connect_timeout).
.. code-block:: sh
export ANSIBLE_PERSISTENT_CONNECT_RETRY_TIMEOUT=30
To make this a permanent change, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
connect_retry_timeout = 30
Timeout issue due to platform specific login menu with ``network_cli`` connection type
--------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``network_cli`` connection plugin adds configuration options
to handle platform-specific login menus. These options can be set as group/host or task
variables.
Example: Handle single login menu prompts with host variables
.. code-block:: console
$cat host_vars/<hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Connect to a host"
ansible_terminal_initial_answer:
- "3"
Example: Handle remote host multiple login menu prompts with host variables
.. code-block:: console
$cat host_vars/<inventory-hostname>.yaml
---
ansible_terminal_initial_prompt:
- "Press any key to enter main menu"
- "Connect to a host"
ansible_terminal_initial_answer:
- "\\r"
- "3"
ansible_terminal_initial_prompt_checkall: True
To handle multiple login menu prompts:
* The values of ``ansible_terminal_initial_prompt`` and ``ansible_terminal_initial_answer`` should be a list.
* The prompt sequence should match the answer sequence.
* The value of ``ansible_terminal_initial_prompt_checkall`` should be set to ``True``.
.. note:: If all the prompts in the sequence are not received from the remote host during connection initialization, the connection will time out.
Playbook issues
===============
This section details issues caused by the playbook itself.
Error: "Unable to enter configuration mode"
-------------------------------------------
**Platforms:** Arista EOS and Cisco IOS
This occurs when you attempt to run a task that requires privileged mode in a user mode shell.
For example:
.. code-block:: console
TASK [ios_system : configure name_servers] *****************************************************************************
task path:
fatal: [ios-csr1000v]: FAILED! => {
"changed": false,
"failed": true,
"msg": "unable to enter configuration mode",
}
Suggestions to resolve:
Use ``connection: ansible.netcommon.network_cli`` and ``become: yes``, as in the sketch below.
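A minimal sketch of a play that sets both (the group name and the configured command are illustrative):

.. code-block:: yaml

   - hosts: ios
     connection: ansible.netcommon.network_cli
     become: yes
     become_method: enable
     tasks:
       - name: configure name servers in privileged mode
         cisco.ios.ios_config:
           lines:
             - ip name-server 192.0.2.1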
Proxy Issues
============
.. _network_delegate_to_vs_ProxyCommand:
delegate_to vs ProxyCommand
---------------------------
In order to use a bastion or intermediate jump host to connect to network devices over ``cli``
transport, network modules support the use of ``ProxyCommand``.
To use ``ProxyCommand``, configure the proxy settings in the Ansible inventory
file to specify the proxy host.
.. code-block:: ini
[nxos]
nxos01
nxos02
[nxos:vars]
ansible_ssh_common_args='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
With the configuration above, simply build and run the playbook as normal with
no additional changes necessary. The network module will now connect to the
network device by first connecting to the host specified in
``ansible_ssh_common_args``, which is ``bastion01`` in the above example.
You can also set the proxy target for all hosts by using environment variables.
.. code-block:: sh
export ANSIBLE_SSH_ARGS='-o ProxyCommand="ssh -W %h:%p -q bastion01"'
Using bastion/jump host with netconf connection
-----------------------------------------------
Enabling jump host setting
--------------------------
Bastion/jump host with netconf connection can be enabled by:
- Setting Ansible variable ``ansible_netconf_ssh_config`` either to ``True`` or custom ssh config file path
- Setting environment variable ``ANSIBLE_NETCONF_SSH_CONFIG`` to ``True`` or custom ssh config file path
- Setting ``ssh_config = 1`` or ``ssh_config = <ssh-file-path>`` under ``netconf_connection`` section
If the configuration variable is set to 1, the ProxyCommand and other SSH variables are read from
the default SSH config file (~/.ssh/config).
If the configuration variable is set to a file path, the ProxyCommand and other SSH variables are read
from the given custom SSH config file path.
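For example, to read the default ~/.ssh/config, you could add the following to your ``ansible.cfg`` file (a minimal sketch):

.. code-block:: ini

   [netconf_connection]
   ssh_config = 1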
Example ssh config file (~/.ssh/config)
---------------------------------------
.. code-block:: ini
Host jumphost
HostName jumphost.domain.name.com
User jumphost-user
IdentityFile "/path/to/ssh-key.pem"
Port 22
# Note: Due to the way that Paramiko reads the SSH Config file,
# you need to specify the NETCONF port that the host uses.
# In other words, it does not automatically use ansible_port
# As a result you need either:
Host junos01
HostName junos01
ProxyCommand ssh -W %h:22 jumphost
# OR
Host junos01
HostName junos01
ProxyCommand ssh -W %h:830 jumphost
# Depending on the netconf port used.
Example Ansible inventory file
.. code-block:: ini
[junos]
junos01
[junos:vars]
ansible_connection=ansible.netcommon.netconf
ansible_network_os=junipernetworks.junos.junos
ansible_user=myuser
ansible_password=!vault...
.. note:: Using ``ProxyCommand`` with passwords via variables
By design, SSH doesn't support providing passwords via environment variables.
This is done to prevent secrets from leaking out, for example in ``ps`` output.
We recommend using SSH keys, and if needed an ssh-agent, rather than passwords, wherever possible.
Miscellaneous Issues
====================
Intermittent failure while using ``ansible.netcommon.network_cli`` connection type
------------------------------------------------------------------------------------
If the command prompt received in response is not matched correctly within
the ``ansible.netcommon.network_cli`` connection plugin the task might fail intermittently with truncated
response or with the error message ``operation requires privilege escalation``.
Starting in 2.7.1, a buffer read timer ensures that prompts are matched properly
and a complete response is sent in output. The timer defaults to 0.2 seconds and
can be adjusted on a per-task basis or set globally, in seconds.
Example Per task timer setting
.. code-block:: yaml
- name: gather ios facts
cisco.ios.ios_facts:
gather_subset: all
register: result
vars:
ansible_buffer_read_timeout: 2
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
buffer_read_timeout = 2
This timer delay per command executed on remote host can be disabled by setting the value to zero.
Task failure due to mismatched error regex within command response using ``ansible.netcommon.network_cli`` connection type
----------------------------------------------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``ansible.netcommon.network_cli`` connection plugin adds configuration options
for the stdout and stderr regexes that identify whether a command execution response is a normal
response or an error response. These options can be set as group/host variables or as task
variables.
Example: For mismatched error response
.. code-block:: yaml
- name: fetch logs from remote host
cisco.ios.ios_command:
commands:
- show logging
Playbook run output:
.. code-block:: console
TASK [first fetch logs] ********************************************************
fatal: [ios01]: FAILED! => {
"changed": false,
"msg": "RF Name:\r\n\r\n <--nsip-->
\"IPSEC-3-REPLAY_ERROR: Test log\"\r\n*Aug 1 08:36:18.483: %SYS-7-USERLOG_DEBUG:
Message from tty578(user id: ansible): test\r\nan-ios-02#"}
Suggestions to resolve:
Modify the error regex for individual task.
.. code-block:: yaml
- name: fetch logs from remote host
cisco.ios.ios_command:
commands:
- show logging
vars:
ansible_terminal_stderr_re:
- pattern: 'connection timed out'
flags: 're.I'
The terminal plugin regex options ``ansible_terminal_stderr_re`` and ``ansible_terminal_stdout_re`` have
``pattern`` and ``flags`` as keys. The value of the ``flags`` key should be a value that is accepted by
the ``re.compile`` python method.
Intermittent failure while using ``ansible.netcommon.network_cli`` connection type due to slower network or remote target host
----------------------------------------------------------------------------------------------------------------------------------
In Ansible 2.9 and later, the ``ansible.netcommon.network_cli`` connection plugin adds a configuration option to control
the number of attempts to connect to a remote host. The default number of attempts is three.
After every retry attempt, the delay between retries increases by a power of 2 (in seconds) until either the
maximum number of attempts is exhausted or the ``persistent_command_timeout`` or ``persistent_connect_timeout`` timer triggers.
To make this a global setting, add the following to your ``ansible.cfg`` file:
.. code-block:: ini
[persistent_connection]
network_cli_retries = 5
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,001 |
Docs: Replace Latin terms in the reference_appendices/ directory with English terms
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/porting_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/porting_guides/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79001
|
https://github.com/ansible/ansible/pull/79010
|
8f4133b514f1b4c8b528771804b31ff47a4e0f84
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
| 2022-10-03T20:03:36Z |
python
| 2022-10-04T09:33:40Z |
docs/docsite/rst/reference_appendices/faq.rst
|
.. _ansible_faq:
Frequently Asked Questions
==========================
Here are some commonly asked questions and their answers.
.. _collections_transition:
Where did all the modules go?
+++++++++++++++++++++++++++++
In July, 2019, we announced that collections would be the `future of Ansible content delivery <https://www.ansible.com/blog/the-future-of-ansible-content-delivery>`_. A collection is a distribution format for Ansible content that can include playbooks, roles, modules, and plugins. In Ansible 2.9 we added support for collections. In Ansible 2.10 we `extracted most modules from the main ansible/ansible repository <https://access.redhat.com/solutions/5295121>`_ and placed them in :ref:`collections <list_of_collections>`. Collections may be maintained by the Ansible team, by the Ansible community, or by Ansible partners. The `ansible/ansible repository <https://github.com/ansible/ansible>`_ now contains the code for basic features and functions, such as copying module code to managed nodes. This code is also known as ``ansible-core`` (it was briefly called ``ansible-base`` for version 2.10).
* To learn more about using collections, see :ref:`collections`.
* To learn more about developing collections, see :ref:`developing_collections`.
* To learn more about contributing to existing collections, see the individual collection repository for guidelines, or see :ref:`contributing_maintained_collections` to contribute to one of the Ansible-maintained collections.
.. _find_my_module:
Where did this specific module go?
++++++++++++++++++++++++++++++++++
If you are searching for a specific module, you can check the `runtime.yml <https://github.com/ansible/ansible/blob/devel/lib/ansible/config/ansible_builtin_runtime.yml>`_ file, which lists the first destination for each module that we extracted from the main ansible/ansible repository. Some modules have moved again since then. You can also search on `Ansible Galaxy <https://galaxy.ansible.com/>`_ or ask on one of our :ref:`chat channels <communication_irc>`.
.. _slow_install:
How can I speed up Ansible on systems with slow disks?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible may feel sluggish on systems with slow disks, such as Raspberry PI. See `Ansible might be running slow if libyaml is not available <https://www.jeffgeerling.com/blog/2021/ansible-might-be-running-slow-if-libyaml-not-available>`_ for hints on how to improve this.
.. _set_environment:
How can I set the PATH or any other environment variable for a task or entire play?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting environment variables can be done with the `environment` keyword. It can be used at the task or other levels in the play.
.. code-block:: yaml
shell:
cmd: date
environment:
LANG: fr_FR.UTF-8
.. code-block:: yaml
hosts: servers
environment:
PATH: "{{ ansible_env.PATH }}:/thingy/bin"
SOME: value
.. note:: Starting in 2.0.1, the setup task from ``gather_facts`` also inherits the environment directive from the play. You might need to use the ``|default`` filter to avoid errors if setting this at the play level.
.. _faq_setting_users_and_ports:
How do I handle different machines needing different user accounts or ports to log in with?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Setting inventory variables in the inventory file is the easiest way.
For instance, suppose these hosts have different usernames and ports:
.. code-block:: ini
[webservers]
asdf.example.com ansible_port=5000 ansible_user=alice
jkl.example.com ansible_port=5001 ansible_user=bob
You can also dictate the connection type to be used, if you want:
.. code-block:: ini
[testcluster]
localhost ansible_connection=local
/path/to/chroot1 ansible_connection=chroot
foo.example.com ansible_connection=paramiko
You may also wish to keep these in group variables instead, or file them in a group_vars/<groupname> file.
See the rest of the documentation for more information about how to organize variables.
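For example, connection settings shared by every host in a group could live in a ``group_vars/webservers.yml`` file instead (a minimal sketch; the values are illustrative):

.. code-block:: yaml

   # group_vars/webservers.yml
   ansible_user: deploy
   ansible_port: 2222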
.. _use_ssh:
How do I get ansible to reuse connections, enable Kerberized SSH, or have Ansible pay attention to my local SSH config file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Switch your default connection type in the configuration file to ``ssh``, or use ``-c ssh`` to use
native OpenSSH for connections instead of the Python paramiko library. In Ansible 1.2.1 and later, ``ssh`` will be used
by default if OpenSSH is new enough to support ControlPersist as an option.
Paramiko is great for starting out, but the OpenSSH type offers many advanced options. You will want to run Ansible
from a machine new enough to support ControlPersist, if you are using this connection type. You can still manage
older clients. If you are using RHEL 6, CentOS 6, SLES 10 or SLES 11 the version of OpenSSH is still a bit old, so
consider managing from a Fedora or openSUSE client even though you are managing older nodes, or just use paramiko.
We keep paramiko as the default because, if you are first installing Ansible on these enterprise operating systems, it offers a better experience for new users.
.. _use_ssh_jump_hosts:
How do I configure a jump host to access servers that I have no direct access to?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set a ``ProxyCommand`` in the
``ansible_ssh_common_args`` inventory variable. Any arguments specified in
this variable are added to the sftp/scp/ssh command line when connecting
to the relevant host(s). Consider the following inventory group:
.. code-block:: ini
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create `group_vars/gatewayed.yml` with the following contents::
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q [email protected]"'
Ansible will append these arguments to the command line when trying to
connect to any hosts in the group ``gatewayed``. (These arguments are used
in addition to any ``ssh_args`` from ``ansible.cfg``, so you do not need to
repeat global ``ControlPersist`` settings in ``ansible_ssh_common_args``.)
Note that ``ssh -W`` is available only with OpenSSH 5.4 or later. With
older versions, it's necessary to execute ``nc %h:%p`` or some equivalent
command on the bastion host.
With earlier versions of Ansible, it was necessary to configure a
suitable ``ProxyCommand`` for one or more hosts in ``~/.ssh/config``,
or globally by setting ``ssh_args`` in ``ansible.cfg``.
.. _ssh_serveraliveinterval:
How do I get Ansible to notice a dead target in a timely manner?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can add ``-o ServerAliveInterval=NumberOfSeconds`` in ``ssh_args`` from ``ansible.cfg``. Without this option,
SSH and therefore Ansible will wait until the TCP connection times out. Another solution is to add ``ServerAliveInterval``
into your global SSH configuration. A good value for ``ServerAliveInterval`` is up to you to decide; keep in mind that
``ServerAliveCountMax=3`` is the SSH default so any value you set will be tripled before terminating the SSH session.
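For example, in ``ansible.cfg`` (a minimal sketch; note that setting ``ssh_args`` replaces the default arguments, so keep any ControlPersist options you rely on):

.. code-block:: ini

   [ssh_connection]
   ssh_args = -o ControlMaster=auto -o ControlPersist=60s -o ServerAliveInterval=30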
.. _cloud_provider_performance:
How do I speed up Ansible runs against servers from cloud providers (EC2, OpenStack, and so on)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Don't try to manage a fleet of machines at a cloud provider from your laptop.
Instead, connect to a management node inside that cloud provider first and run Ansible from there.
.. _python_interpreters:
How do I handle not having a Python interpreter at /usr/bin/python on a remote machine?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While you can write Ansible modules in any language, most Ansible modules are written in Python,
including the ones central to letting Ansible work.
By default, Ansible assumes it can find a :command:`/usr/bin/python` on your remote system that is
either Python2, version 2.6 or higher or Python3, 3.5 or higher.
Setting the inventory variable ``ansible_python_interpreter`` on any host will tell Ansible to
auto-replace the Python interpreter with that value instead. Thus, you can point to any Python you
want on the system if :command:`/usr/bin/python` on your system does not point to a compatible
Python interpreter.
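For example, in an INI inventory (a minimal sketch; the hostname and interpreter path are illustrative):

.. code-block:: ini

   freebsd01 ansible_python_interpreter=/usr/local/bin/python3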
Some platforms may only have Python 3 installed by default. If it is not installed as
:command:`/usr/bin/python`, you will need to configure the path to the interpreter via
``ansible_python_interpreter``. Although most core modules will work with Python 3, there may be some
special purpose ones which do not or you may encounter a bug in an edge case. As a temporary
workaround you can install Python 2 on the managed host and configure Ansible to use that Python via
``ansible_python_interpreter``. If there's no mention in the module's documentation that the module
requires Python 2, you can also report a bug on our `bug tracker
<https://github.com/ansible/ansible/issues>`_ so that the incompatibility can be fixed in a future release.
Do not replace the shebang lines of your python modules. Ansible will do this for you automatically at deploy time.
Also, this works for ANY interpreter, for example ruby: ``ansible_ruby_interpreter``, perl: ``ansible_perl_interpreter``, and so on,
so you can use this for custom modules written in any scripting language and control the interpreter location.
Keep in mind that if you put ``env`` in your module shebang line (``#!/usr/bin/env <other>``),
this facility will be ignored so you will be at the mercy of the remote `$PATH`.
.. _installation_faqs:
How do I handle the dependencies required by Ansible's package dependencies during installation?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
While installing Ansible, sometimes you may encounter errors such as `No package 'libffi' found` or `fatal error: Python.h: No such file or directory`.
These errors are generally caused by missing packages, which are dependencies of the packages required by Ansible.
For example, the `libffi` package is a dependency of `pynacl` and `paramiko` (Ansible -> paramiko -> pynacl -> libffi).
In order to solve these kinds of dependency issues, you might need to install required packages using
the OS native package managers, such as `yum`, `dnf`, or `apt`, or as mentioned in the package installation guide.
Refer to the documentation of the respective package for such dependencies and their installation methods.
Common Platform Issues
++++++++++++++++++++++
What customer platforms does Red Hat support?
---------------------------------------------
A number of them! For a definitive list please see this `Knowledge Base article <https://access.redhat.com/articles/3168091>`_.
Running in a virtualenv
-----------------------
You can install Ansible into a virtualenv on the controller quite simply:
.. code-block:: shell
$ virtualenv ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you want to run under Python 3 instead of Python 2 you may want to change that slightly:
.. code-block:: shell
$ virtualenv -p python3 ansible
$ source ./ansible/bin/activate
$ pip install ansible
If you need to use any libraries which are not available via pip (for instance, SELinux Python
bindings on systems such as Red Hat Enterprise Linux or Fedora that have SELinux enabled), then you
need to install them into the virtualenv. There are two methods:
* When you create the virtualenv, specify ``--system-site-packages`` to make use of any libraries
installed in the system's Python:
.. code-block:: shell
$ virtualenv ansible --system-site-packages
* Copy those files in manually from the system. For instance, for SELinux bindings you might do:
.. code-block:: shell
$ virtualenv -p python3 py3-ansible
$ cp -r -v /usr/lib64/python3.*/site-packages/selinux/ ./py3-ansible/lib64/python3.*/site-packages/
$ cp -v /usr/lib64/python3.*/site-packages/*selinux*.so ./py3-ansible/lib64/python3.*/site-packages/
Running on macOS
----------------
When executing Ansible on a system with macOS as a controller machine one might encounter the following error:
.. error::
+[__NSCFConstantString initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to debug.
ERROR! A worker was found in a dead state
In general the recommended workaround is to set the following environment variable in your shell:
.. code-block:: shell
$ export OBJC_DISABLE_INITIALIZE_FORK_SAFETY=YES
Running on BSD
--------------
.. seealso:: :ref:`working_with_bsd`
Running on Solaris
------------------
By default, Solaris 10 and earlier run a non-POSIX shell which does not correctly expand the default
tmp directory Ansible uses (:file:`~/.ansible/tmp`). If you see module failures on Solaris machines, this
is likely the problem. There are several workarounds:
* You can set ``remote_tmp`` to a path that will expand correctly with the shell you are using
(see the plugin documentation for :ref:`C shell<csh_shell>`, :ref:`fish shell<fish_shell>`,
and :ref:`Powershell<powershell_shell>`). For example, in the ansible config file you can set::
remote_tmp=$HOME/.ansible/tmp
In Ansible 2.5 and later, you can also set it per-host in inventory like this::
solaris1 ansible_remote_tmp=$HOME/.ansible/tmp
* You can set :ref:`ansible_shell_executable<ansible_shell_executable>` to the path to a POSIX compatible shell. For
instance, many Solaris hosts have a POSIX shell located at :file:`/usr/xpg4/bin/sh` so you can set
this in inventory like so::
solaris1 ansible_shell_executable=/usr/xpg4/bin/sh
(bash, ksh, and zsh should also be POSIX compatible if you have any of those installed).
Running on z/OS
---------------
There are a few common errors that one might run into when trying to execute Ansible on z/OS as a target.
* Version 2.7.6 of python for z/OS will not work with Ansible because it represents strings internally as EBCDIC.
To get around this limitation, download and install a later version of `python for z/OS <https://www.rocketsoftware.com/zos-open-source>`_ (2.7.13 or 3.6.1) that represents strings internally as ASCII. Version 2.7.13 is verified to work.
* When ``pipelining = False`` in `/etc/ansible/ansible.cfg`, Ansible modules are transferred in binary mode through SFTP; however, execution of Python fails with
.. error::
SyntaxError: Non-UTF-8 code starting with \'\\x83\' in file /a/user1/.ansible/tmp/ansible-tmp-1548232945.35-274513842609025/AnsiballZ_stat.py on line 1, but no encoding declared; see https://python.org/dev/peps/pep-0263/ for details
To fix it set ``pipelining = True`` in `/etc/ansible/ansible.cfg`.
* The Python interpreter cannot be found in the default location ``/usr/bin/python`` on the target host.
.. error::
/usr/bin/python: EDC5129I No such file or directory
To fix this set the path to the python installation in your inventory like so::
zos1 ansible_python_interpreter=/usr/lpp/python/python-2017-04-12-py27/python27/bin/python
* Start of python fails with ``The module libpython2.7.so was not found.``
.. error::
EE3501S The module libpython2.7.so was not found.
On z/OS, you must execute python from gnu bash. If gnu bash is installed at ``/usr/lpp/bash``, you can fix this in your inventory by specifying an ``ansible_shell_executable``::
zos1 ansible_shell_executable=/usr/lpp/bash/bin/bash
Running under fakeroot
----------------------
Some issues arise because ``fakeroot`` does not create a full, POSIX-compliant environment by default.
It is known that it will not correctly expand the default tmp directory Ansible uses (:file:`~/.ansible/tmp`).
If you see module failures, this is likely the problem.
The simple workaround is to set ``remote_tmp`` to a path that will expand correctly (see documentation of the shell plugin you are using for specifics).
For example, in the ansible config file (or via environment variable) you can set::
remote_tmp=$HOME/.ansible/tmp
.. _use_roles:
What is the best way to make content reusable/redistributable?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
If you have not done so already, read all about "Roles" in the playbooks documentation. This helps you make playbook content
self-contained, and works well with things like git submodules for sharing content with others.
If some of these plugin types look strange to you, see the API documentation for more details about ways Ansible can be extended.
.. _configuration_file:
Where does the configuration file live and what can I configure in it?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
See :ref:`intro_configuration`.
.. _who_would_ever_want_to_disable_cowsay_but_ok_here_is_how:
How do I disable cowsay?
++++++++++++++++++++++++
If cowsay is installed, Ansible takes it upon itself to make your day happier when running playbooks. If you decide
that you would like to work in a professional cow-free environment, you can either uninstall cowsay, set ``nocows=1``
in ``ansible.cfg``, or set the :envvar:`ANSIBLE_NOCOWS` environment variable:
.. code-block:: shell-session
export ANSIBLE_NOCOWS=1
.. _browse_facts:
How do I see a list of all of the ansible\_ variables?
++++++++++++++++++++++++++++++++++++++++++++++++++++++
Ansible by default gathers "facts" about the machines under management, and these facts can be accessed in playbooks
and in templates. To see a list of all of the facts that are available about a machine, you can run the ``setup`` module
as an ad hoc action:
.. code-block:: shell-session
ansible -m setup hostname
This will print out a dictionary of all of the facts that are available for that particular host. You might want to pipe
the output to a pager. This does NOT include inventory variables or internal 'magic' variables. See the next question
if you need more than just 'facts'.
.. _browse_inventory_vars:
How do I see all the inventory variables defined for my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
By running the following command, you can see inventory variables for a host:
.. code-block:: shell-session
ansible-inventory --list --yaml
.. _browse_host_vars:
How do I see all the variables specific to my host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
To see all host specific variables, which might include facts and other sources:
.. code-block:: shell-session
ansible -m debug -a "var=hostvars['hostname']" localhost
Unless you are using a fact cache, you normally need to use a play that gathers facts first, for facts included in the task above.
.. _host_loops:
How do I loop over a list of hosts in a group, inside of a template?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A pretty common pattern is to iterate over a list of hosts inside of a host group, perhaps to populate a template configuration
file with a list of servers. To do this, you can just access the "$groups" dictionary in your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ host }}
{% endfor %}
If you need to access facts about these hosts, for instance, the IP address of each hostname,
you need to make sure that the facts have been populated. For example, make sure you have a play that talks to db_servers::
- hosts: db_servers
tasks:
- debug: msg="doesn't matter what you do, just that they were talked to previously."
Then you can use the facts inside your template, like this:
.. code-block:: jinja
{% for host in groups['db_servers'] %}
{{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}
.. _programatic_access_to_a_variable:
How do I access a variable name programmatically?
+++++++++++++++++++++++++++++++++++++++++++++++++
An example may come up where we need to get the ipv4 address of an arbitrary interface, where the interface to be used may be supplied
via a role parameter or other input. Variable names can be built by adding strings together using "~", like so:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface]['ipv4']['address'] }}
The trick about going through hostvars is necessary because it's a dictionary of the entire namespace of variables. ``inventory_hostname``
is a magic variable that indicates the current host you are looping over in the host loop.
In the example above, if your interface names have dashes, you must replace them with underscores:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['ansible_' ~ which_interface | replace('-', '_') ]['ipv4']['address'] }}
Also see dynamic_variables_.
.. _access_group_variable:
How do I access a group variable?
+++++++++++++++++++++++++++++++++
Technically, you don't; Ansible does not really use groups directly. Groups are labels for host selection and a way to bulk-assign variables;
they are not a first-class entity, and Ansible only cares about hosts and tasks.
That said, you could just access the variable by selecting a host that is part of that group, see first_host_in_a_group_ below for an example.
.. _first_host_in_a_group:
How do I access a variable of the first host in a group?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++
What happens if we want the ip address of the first webserver in the webservers group? Well, we can do that too. Note that if we
are using dynamic inventory, which host is the 'first' may not be consistent, so you wouldn't want to do this unless your inventory
is static and predictable. (If you are using AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, it will use database order, so this isn't a problem even if you are using cloud
based inventory scripts).
Anyway, here's the trick:
.. code-block:: jinja
{{ hostvars[groups['webservers'][0]]['ansible_eth0']['ipv4']['address'] }}
Notice how we're pulling out the hostname of the first machine of the webservers group. If you are doing this in a template, you
could use the Jinja2 ``{% set %}`` statement to simplify this, or in a playbook, you could also use set_fact::
- set_fact: headnode={{ groups['webservers'][0] }}
- debug: msg={{ hostvars[headnode].ansible_eth0.ipv4.address }}
Notice how we interchanged the bracket syntax for dots -- that can be done anywhere.
.. _file_recursion:
How do I copy files recursively onto a target host?
+++++++++++++++++++++++++++++++++++++++++++++++++++
The ``copy`` module has a recursive parameter. However, take a look at the ``synchronize`` module if you want to do something more efficient
for a large number of files. The ``synchronize`` module wraps rsync. See the module index for info on both of these modules.
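A minimal sketch of both approaches (paths are illustrative; ``synchronize`` requires rsync on both ends):

.. code-block:: yaml

   - name: Copy a directory tree with the copy module
     copy:
       src: files/app/    # trailing slash copies the directory contents
       dest: /opt/app/

   - name: Sync a large directory tree more efficiently with rsync
     synchronize:
       src: files/app/
       dest: /opt/app/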
.. _shell_env:
How do I access shell environment variables?
++++++++++++++++++++++++++++++++++++++++++++
**On the controller machine:** To access existing variables on the controller, use the ``env`` lookup plugin.
For example, to access the value of the HOME environment variable on the management machine::
---
# ...
vars:
local_home: "{{ lookup('env','HOME') }}"
**On target machines:** Environment variables are available through facts in the ``ansible_env`` variable:
.. code-block:: jinja
{{ ansible_env.HOME }}
If you need to set environment variables for TASK execution, see :ref:`playbooks_environment`
in the :ref:`Advanced Playbooks <playbooks_special_topics>` section.
There are several ways to set environment variables on your target machines. You can use the
:ref:`template <template_module>`, :ref:`replace <replace_module>`, or :ref:`lineinfile <lineinfile_module>`
modules to introduce environment variables into files. The exact files to edit vary depending on your OS
and distribution and local configuration.
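For example, a minimal sketch using ``lineinfile`` (the file location and variable are illustrative and vary by OS):

.. code-block:: yaml

   - name: Set an environment variable for all users
     lineinfile:
       path: /etc/environment
       line: 'APP_ENV=production'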
.. _user_passwords:
How do I generate encrypted passwords for the user module?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
An Ansible ad hoc command is the easiest option:
.. code-block:: shell-session
ansible all -i localhost, -m debug -a "msg={{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"
The ``mkpasswd`` utility that is available on most Linux systems is also a great option:
.. code-block:: shell-session
mkpasswd --method=sha-512
If this utility is not installed on your system (for example, you are using macOS) then you can still easily
generate these passwords using Python. First, ensure that the `Passlib <https://foss.heptapod.net/python-libs/passlib/-/wikis/home>`_
password hashing library is installed:
.. code-block:: shell-session
pip install passlib
Once the library is ready, SHA512 password values can then be generated as follows:
.. code-block:: shell-session
python -c "from passlib.hash import sha512_crypt; import getpass; print(sha512_crypt.using(rounds=5000).hash(getpass.getpass()))"
Use the integrated :ref:`hash_filters` to generate a hashed version of a password.
You shouldn't put plaintext passwords in your playbook or host_vars; instead, use :ref:`playbooks_vault` to encrypt sensitive data.
On OpenBSD, a similar option is available in the base system, called ``encrypt(1)``.
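Putting it together, a minimal sketch of creating a user with a hashed password (in practice, keep the password and salt in Ansible Vault rather than in plaintext):

.. code-block:: yaml

   - name: Create a user with a hashed password
     user:
       name: testuser
       password: "{{ 'mypassword' | password_hash('sha512', 'mysecretsalt') }}"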
.. _dot_or_array_notation:
Ansible allows dot notation and array notation for variables. Which notation should I use?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The dot notation comes from Jinja and works fine for variables without special
characters. If your variable contains dots (.), colons (:), or dashes (-), if
a key begins and ends with two underscores, or if a key uses any of the known
public attributes, it is safer to use the array notation. See :ref:`playbooks_variables`
for a list of the known public attributes.
.. code-block:: jinja
item[0]['checksum:md5']
item['section']['2.1']
item['region']['Mid-Atlantic']
It is {{ temperature['Celsius']['-3'] }} outside.
Array notation also allows for dynamic variable composition, see dynamic_variables_.
Another problem with dot notation is that some keys collide with attributes and methods of Python dictionaries.
.. code-block:: jinja
item.update # this breaks if item is a dictionary, as 'update()' is a python method for dictionaries
item['update'] # this works
.. _argsplat_unsafe:
When is it unsafe to bulk-set task arguments from a variable?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
You can set all of a task's arguments from a dictionary-typed variable. This
technique can be useful in some dynamic execution scenarios. However, it
introduces a security risk. We do not recommend it, so Ansible issues a
warning when you do something like this::
#...
vars:
usermod_args:
name: testuser
state: present
update_password: always
tasks:
- user: '{{ usermod_args }}'
This particular example is safe. However, constructing tasks like this is
risky because the parameters and values passed to ``usermod_args`` could
be overwritten by malicious values in the ``host facts`` on a compromised
target machine. To mitigate this risk:
* set bulk variables at a level of precedence greater than ``host facts`` in the order of precedence
found in :ref:`ansible_variable_precedence` (the example above is safe because play vars take
precedence over facts)
* disable the :ref:`inject_facts_as_vars` configuration setting to prevent fact values from colliding
with variables (this will also disable the original warning)
.. _commercial_support:
Can I get training on Ansible?
++++++++++++++++++++++++++++++
Yes! See our `services page <https://www.ansible.com/products/consulting>`_ for information on our services
and training offerings. Email `[email protected] <mailto:[email protected]>`_ for further details.
We also offer free web-based training classes on a regular basis. See our
`webinar page <https://www.ansible.com/resources/webinars-training>`_ for more info on upcoming webinars.
.. _web_interface:
Is there a web interface / REST API / GUI?
++++++++++++++++++++++++++++++++++++++++++++
Yes! The open-source web interface is Ansible AWX. The supported Red Hat product that makes Ansible even more powerful and easy to use is :ref:`Red Hat Ansible Automation Platform <ansible_platform>`.
.. _keep_secret_data:
How do I keep secret data in my playbook?
+++++++++++++++++++++++++++++++++++++++++
If you would like to keep secret data in your Ansible content and still share it publicly or keep things in source control, see :ref:`playbooks_vault`.
If you have a task that you don't want to show the results or command given to it when using -v (verbose) mode, the following task or playbook attribute can be useful::
- name: secret task
shell: /usr/bin/do_something --value={{ secret_value }}
no_log: True
This can be used to keep verbose output but hide sensitive information from others who would otherwise like to be able to see the output.
The ``no_log`` attribute can also apply to an entire play::
- hosts: all
no_log: True
Though this will make the play somewhat difficult to debug. It's recommended that this
be applied to single tasks only, once a playbook is completed. Note that the use of the
``no_log`` attribute does not prevent data from being shown when debugging Ansible itself via
the :envvar:`ANSIBLE_DEBUG` environment variable.
.. _when_to_use_brackets:
.. _dynamic_variables:
.. _interpolate_variables:
When should I use {{ }}? Also, how to interpolate variables or dynamic variable names
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A steadfast rule is 'always use ``{{ }}`` except when ``when:``'.
Conditionals are always run through Jinja2 so as to resolve the expression,
so ``when:``, ``failed_when:`` and ``changed_when:`` are always templated and you should avoid adding ``{{ }}``.
In most other cases you should always use the brackets, even if previously you could use variables without
specifying (like ``loop`` or ``with_`` clauses), as this made it hard to distinguish between an undefined variable and a string.
Another rule is 'moustaches don't stack'. We often see this:
.. code-block:: jinja
{{ somevar_{{other_var}} }}
The above DOES NOT WORK as you expect, if you need to use a dynamic variable use the following as appropriate:
.. code-block:: jinja
{{ hostvars[inventory_hostname]['somevar_' ~ other_var] }}
For 'non host vars' you can use the :ref:`vars lookup<vars_lookup>` plugin:
.. code-block:: jinja
{{ lookup('vars', 'somevar_' ~ other_var) }}
To determine if a keyword requires ``{{ }}`` or even supports templating, use ``ansible-doc -t keyword <name>``.
This returns documentation on the keyword, including a ``template`` field with the values ``explicit`` (requires ``{{ }}``),
``implicit`` (assumes ``{{ }}``, so not needed) or ``static`` (no templating supported, all characters will be interpreted literally).
.. _ansible_host_delegated:
How do I get the original ansible_host when I delegate a task?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
As the documentation states, connection variables are taken from the ``delegate_to`` host so ``ansible_host`` is overwritten,
but you can still access the original via ``hostvars``::
original_host: "{{ hostvars[inventory_hostname]['ansible_host'] }}"
This works for all overridden connection variables, like ``ansible_user``, ``ansible_port``, and so on.
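For example, a minimal sketch that uses the original host's address while the task runs on a delegated host (the hostname ``dns01`` is illustrative):

.. code-block:: yaml

   - name: Register this host's address on the DNS server
     lineinfile:
       path: /etc/hosts
       line: "{{ hostvars[inventory_hostname]['ansible_host'] }} {{ inventory_hostname }}"
     delegate_to: dns01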
.. _scp_protocol_error_filename:
How do I fix 'protocol error: filename does not match request' when fetching a file?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Since release ``7.9p1`` of OpenSSH there is a `bug <https://bugzilla.mindrot.org/show_bug.cgi?id=2966>`_
in the SCP client that can trigger this error on the Ansible controller when using SCP as the file transfer mechanism::
failed to transfer file to /tmp/ansible/file.txt\r\nprotocol error: filename does not match request
In these releases, SCP tries to validate that the path of the file to fetch matches the requested path.
The validation
fails if the remote filename requires quotes to escape spaces or non-ascii characters in its path. To avoid this error:
* Use SFTP instead of SCP by setting ``scp_if_ssh`` to ``smart`` (which tries SFTP first) or to ``False``. You can do this in one of four ways:
* Rely on the default setting, which is ``smart`` - this works if ``scp_if_ssh`` is not explicitly set anywhere
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>` in inventory: ``ansible_scp_if_ssh: False``
* Set an environment variable on your control node: ``export ANSIBLE_SCP_IF_SSH=False``
* Pass an environment variable when you run Ansible: ``ANSIBLE_SCP_IF_SSH=smart ansible-playbook``
* Modify your ``ansible.cfg`` file: add ``scp_if_ssh=False`` to the ``[ssh_connection]`` section
* If you must use SCP, set the ``-T`` arg to tell the SCP client to ignore path validation. You can do this in one of three ways:
* Set a :ref:`host variable <host_variables>` or :ref:`group variable <group_variables>`: ``ansible_scp_extra_args=-T``,
* Export or pass an environment variable: ``ANSIBLE_SCP_EXTRA_ARGS=-T``
* Modify your ``ansible.cfg`` file: add ``scp_extra_args=-T`` to the ``[ssh_connection]`` section
.. note:: If you see an ``invalid argument`` error when using ``-T``, then your SCP client is not performing filename validation and will not trigger this error.
.. _mfa_support:
Does Ansible support multi-factor authentication (2FA/MFA/biometrics/fingerprint/USB key/OTP/...)?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
No, Ansible is designed to execute multiple tasks against multiple targets, minimizing user interaction.
As with most automation tools, it is not compatible with interactive security systems designed to handle human interaction.
Most of these systems require a secondary prompt per target, which prevents scaling to thousands of targets. They also
tend to have very short expiration periods, so they require frequent reauthorization, which is also an issue with many hosts and/or
a long set of tasks.
In such environments we recommend securing around Ansible's execution but still allowing it to use an 'automation user' that does not require such measures.
With AWX or the :ref:`Red Hat Ansible Automation Platform <ansible_platform>`, administrators can set up RBAC access to inventory, along with managing credentials and job execution.
.. _complex_configuration_validation:
The 'validate' option is not enough for my needs, what do I do?
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Many Ansible modules that create or update files have a ``validate`` option that allows you to abort the update if the validation command fails.
This uses the temporary file Ansible creates before doing the final update. In many cases this does not work since the validation tools
for the specific application require either specific names, multiple files or some other factor that is not present in this simple feature.
For these cases you have to handle the validation and restoration yourself. The following is a simple example of how to do this with block/rescue
and backups, which most file based modules also support:
.. code-block:: yaml
- name: update config and backout if validation fails
block:
- name: do the actual update, works with copy, lineinfile and any action that allows for `backup`.
template: src=template.j2 dest=/x/y/z backup=yes moreoptions=stuff
register: updated
- name: run validation, this will change a lot as needed. We assume it returns an error when not passing, use `failed_when` if otherwise.
shell: run_validation_command
become: true
become_user: requiredbyapp
environment:
WEIRD_REQUIREMENT: 1
rescue:
- name: restore backup file to original, in the hope the previous configuration was working.
copy:
remote_src: true
dest: /x/y/z
src: "{{ updated['backup_file'] }}"
always:
- name: We choose to always delete backup, but could copy or move, or only delete in rescue.
file:
path: "{{ updated['backup_file'] }}"
state: absent
.. _jinja2_faqs:
Why does the ``regex_search`` filter return `None` instead of an empty string?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Until the jinja2 2.10 release, Jinja was only able to return strings, but Ansible needed Python objects in some cases. Ansible uses ``safe_eval`` and only sends strings that look like certain types of Python objects through this function. With ``regex_search`` that does not find a match, the result (``None``) is converted to the string "None" which is not useful in non-native jinja2.
The following example of a single templating action shows this behavior:
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') }}
This example finds no match, so historically Ansible converted the result to "" (empty string).
The native jinja2 functionality actually allows us to return full Python objects, that are always represented as Python objects everywhere, and as such the result of a single templating action with ``regex_search`` can result in the Python ``None``.
.. note::
Native jinja2 functionality is not needed when ``regex_search`` is used as an intermediate result that is then compared to the jinja2 ``none`` test.
.. code-block:: Jinja
{{ 'ansible' | regex_search('foobar') is none }}
.. _docs_contributions:
How do I submit a change to the documentation?
++++++++++++++++++++++++++++++++++++++++++++++
Documentation for Ansible is kept in the main project git repository, and complete instructions
for contributing can be found in the docs README `viewable on GitHub <https://github.com/ansible/ansible/blob/devel/docs/docsite/README.md>`_. Thanks!
.. _legacy_vs_builtin:
What is the difference between ``ansible.legacy`` and ``ansible.builtin`` collections?
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Neither is a real collection. They are virtually constructed by the core engine (synthetic collections).
The ``ansible.builtin`` collection only refers to plugins that ship with ``ansible-core``.
The ``ansible.legacy`` collection is a superset of ``ansible.builtin`` (you can reference the plugins from builtin through ``ansible.legacy``). You also get the ability to
add 'custom' plugins in the :ref:`configured paths and adjacent directories <ansible_search_path>`, with the ability to override the builtin plugins that have the same name.
Also, ``ansible.legacy`` is what you get by default when you do not specify an FQCN.
So this:
.. code-block:: yaml
- shell: echo hi
Is really equivalent to:
.. code-block:: yaml
- ansible.legacy.shell: echo hi
Though, if you do not override the ``shell`` module, you can also just write it as ``ansible.builtin.shell``, since legacy will resolve to the builtin collection.
.. _i_dont_see_my_question:
I don't see my question here
++++++++++++++++++++++++++++
If you have not found an answer to your questions, you can ask on one of our mailing lists or chat channels. For instructions on subscribing to a list or joining a chat channel, see :ref:`communication`.
.. seealso::
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-project>`_
Have a question? Stop by the google group!
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,001 |
Docs: Replace Latin terms in the reference_appendices/ directory with English terms
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/porting_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/porting_guides/ directory to find these.
List of all effected files are in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79001
|
https://github.com/ansible/ansible/pull/79010
|
8f4133b514f1b4c8b528771804b31ff47a4e0f84
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
| 2022-10-03T20:03:36Z |
python
| 2022-10-04T09:33:40Z |
docs/docsite/rst/reference_appendices/general_precedence.rst
|
.. _general_precedence_rules:
Controlling how Ansible behaves: precedence rules
=================================================
To give you maximum flexibility in managing your environments, Ansible offers many ways to control how Ansible behaves: how it connects to managed nodes, how it works once it has connected.
If you use Ansible to manage a large number of servers, network devices, and cloud resources, you may define Ansible behavior in several different places and pass that information to Ansible in several different ways.
This flexibility is convenient, but it can backfire if you do not understand the precedence rules.
These precedence rules apply to any setting that can be defined in multiple ways (by configuration settings, command-line options, playbook keywords, variables).
.. contents::
:local:
Precedence categories
---------------------
Ansible offers four sources for controlling its behavior. In order of precedence from lowest (most easily overridden) to highest (overrides all others), the categories are:
* Configuration settings
* Command-line options
* Playbook keywords
* Variables
Each category overrides any information from all lower-precedence categories. For example, a playbook keyword will override any configuration setting.
Within each precedence category, specific rules apply. However, generally speaking, 'last defined' wins and overrides any previous definitions.
Configuration settings
^^^^^^^^^^^^^^^^^^^^^^
:ref:`Configuration settings<ansible_configuration_settings>` include both values from the ``ansible.cfg`` file and environment variables. Within this category, values set in configuration files have lower precedence. Ansible uses the first ``ansible.cfg`` file it finds, ignoring all others. Ansible searches for ``ansible.cfg`` in these locations in order:
* ``ANSIBLE_CONFIG`` (environment variable if set)
* ``ansible.cfg`` (in the current directory)
* ``~/.ansible.cfg`` (in the home directory)
* ``/etc/ansible/ansible.cfg``
Environment variables have a higher precedence than entries in ``ansible.cfg``. If you have environment variables set on your control node, they override the settings in whichever ``ansible.cfg`` file Ansible loads. The value of any given environment variable follows normal shell precedence: the last value defined overwrites previous values.
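For example, if your ``ansible.cfg`` contains ``forks = 10`` under ``[defaults]``, setting the corresponding environment variable on the control node wins (a minimal sketch)::

    export ANSIBLE_FORKS=20
    ansible-playbook site.yml   # runs with 20 forks, not 10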
Command-line options
^^^^^^^^^^^^^^^^^^^^
Any command-line option will override any configuration setting.
When you type something directly at the command line, you may feel that your hand-crafted values should override all others, but Ansible does not work that way. Command-line options have low precedence - they override configuration only. They do not override playbook keywords, variables from inventory or variables from playbooks.
You can override all other settings from all other sources in all other precedence categories at the command line by :ref:`general_precedence_extra_vars`, but that is not a command-line option, it is a way of passing a :ref:`variable<general_precedence_variables>`.
At the command line, if you pass multiple values for a parameter that accepts only a single value, the last defined value wins. For example, this :ref:`ad hoc task<intro_adhoc>` will connect as ``carol``, not as ``mike``::
ansible -u mike -m ping myhost -u carol
Some parameters allow multiple values. In this case, Ansible will append all values from the hosts listed in inventory files inventory1 and inventory2::
ansible -i /path/inventory1 -i /path/inventory2 -m ping all
The help for each :ref:`command-line tool<command_line_tools>` lists available options for that tool.
Playbook keywords
^^^^^^^^^^^^^^^^^
Any :ref:`playbook keyword<playbook_keywords>` will override any command-line option and any configuration setting.
Within playbook keywords, precedence flows with the playbook itself; the more specific wins against the more general:
- play (most general)
- blocks/includes/imports/roles (optional and can contain tasks and each other)
- tasks (most specific)
A simple example::
- hosts: all
connection: ssh
tasks:
- name: This task uses ssh.
ping:
- name: This task uses paramiko.
connection: paramiko
ping:
In this example, the ``connection`` keyword is set to ``ssh`` at the play level. The first task inherits that value, and connects using ``ssh``. The second task inherits that value, overrides it, and connects using ``paramiko``.
The same logic applies to blocks and roles as well. All tasks, blocks, and roles within a play inherit play-level keywords; any task, block, or role can override any keyword by defining a different value for that keyword within the task, block, or role.
Remember that these are KEYWORDS, not variables. Both playbooks and variable files are defined in YAML but they have different significance.
Playbooks are the command or 'state description' structure for Ansible, variables are data we use to help make playbooks more dynamic.
.. _general_precedence_variables:
Variables
^^^^^^^^^
Any variable will override any playbook keyword, any command-line option, and any configuration setting.
Variables that have equivalent playbook keywords, command-line options, and configuration settings are known as :ref:`connection_variables`. Originally designed for connection parameters, this category has expanded to include other core variables like the temporary directory and the Python interpreter.
Connection variables, like all variables, can be set in multiple ways and places. You can define variables for hosts and groups in :ref:`inventory<intro_inventory>`. You can define variables for tasks and plays in ``vars:`` blocks in :ref:`playbooks<about_playbooks>`. However, they are still variables - they are data, not keywords or configuration settings. Variables that override playbook keywords, command-line options, and configuration settings follow the same rules of :ref:`variable precedence <ansible_variable_precedence>` as any other variables.
When set in a playbook, variables follow the same inheritance rules as playbook keywords. You can set a value for the play, then override it in a task, block, or role::
- hosts: cloud
gather_facts: false
become: true
vars:
ansible_become_user: admin
tasks:
- name: This task uses admin as the become user.
dnf:
name: some-service
state: latest
- block:
- name: This task uses service-admin as the become user.
# a task to configure the new service
- name: This task also uses service-admin as the become user, defined in the block.
# second task to configure the service
vars:
ansible_become_user: service-admin
- name: This task (outside of the block) uses admin as the become user again.
service:
name: some-service
state: restarted
Variable scope: how long is a value available?
""""""""""""""""""""""""""""""""""""""""""""""
Variable values set in a playbook exist only within the playbook object that defines them. These 'playbook object scope' variables are not available to subsequent objects, including other plays.
Variable values associated directly with a host or group, including variables defined in inventory, by vars plugins, or using modules like :ref:`set_fact<set_fact_module>` and :ref:`include_vars<include_vars_module>`, are available to all plays. These 'host scope' variables are also available via the ``hostvars[]`` dictionary.
.. _general_precedence_extra_vars:
Using ``-e`` extra variables at the command line
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
To override all other settings in all other categories, you can use extra variables: ``--extra-vars`` or ``-e`` at the command line. Values passed with ``-e`` are variables, not command-line options, and they will override configuration settings, command-line options, and playbook keywords as well as variables set elsewhere. For example, this task will connect as ``brian`` not as ``carol``::
ansible -u carol -e 'ansible_user=brian' -a whoami all
You must specify both the variable name and the value with ``--extra-vars``.
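Extra variables can also be passed as quoted JSON, or loaded from a YAML or JSON file with the ``@`` prefix. A minimal sketch (the playbook and file names are illustrative)::

    ansible-playbook site.yml -e '{"ansible_user": "brian"}'
    ansible-playbook site.yml -e @extra_vars.yml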
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,001 |
Docs: Replace Latin terms in the reference_appendices/ directory with English terms
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/porting_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/porting_guides/ directory to find these.
List of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79001
|
https://github.com/ansible/ansible/pull/79010
|
8f4133b514f1b4c8b528771804b31ff47a4e0f84
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
| 2022-10-03T20:03:36Z |
python
| 2022-10-04T09:33:40Z |
docs/docsite/rst/reference_appendices/glossary.rst
|
Glossary
========
The following is a list (and re-explanation) of term definitions used elsewhere in the Ansible documentation.
Consult the documentation home page for the full documentation and to see the terms in context, but this should be a good resource
to check your knowledge of Ansible's components and understand how they fit together. It's something you might wish to read for review or
when a term comes up on the mailing list.
.. glossary::
Action
An action is a part of a task that specifies which of the modules to
run and which arguments to pass to that module. Each task can have
only one action, but it may also have other parameters.
Ad Hoc
Refers to running Ansible to perform some quick command, using
:command:`/usr/bin/ansible`, rather than the :term:`orchestration`
language, which is :command:`/usr/bin/ansible-playbook`. An example
of an ad hoc command might be rebooting 50 machines in your
infrastructure. Anything you can do ad hoc can be accomplished by
writing a :term:`playbook <playbooks>` and playbooks can also glue
lots of other operations together.
Ansible (the package)
A software package (Python, deb, rpm, and so on) that contains ansible-core and a select group of collections. Playbooks that worked with Ansible 2.9 should still work with the Ansible 2.10 package. See the :file:`ansible-<version>.build` file in the release-specific directory at `ansible-build-data <https://github.com/ansible-community/ansible-build-data>`_ for a list of collections included in Ansible, as well as the included ``ansible-core`` version.
ansible-base
Used only for 2.10. The installable package (RPM/Python/Deb package) generated from the `ansible/ansible repository <https://github.com/ansible/ansible>`_. See ``ansible-core``.
ansible-core
Name used starting with 2.11. The installable package (RPM/Python/Deb package) generated from the `ansible/ansible repository <https://github.com/ansible/ansible>`_. Contains the command-line tools and the code for basic features and functions, such as copying module code to managed nodes. The ``ansible-core`` package includes a few modules and plugins and allows you to add others by installing collections.
Ansible Galaxy
An `online distribution server <https://galaxy.ansible.com>`_ for finding and sharing Ansible community content, sometimes referred to as community Galaxy. Also, the command-line utility that lets users install individual Ansible Collections, for example ``ansible-galaxy collection install community.crypto``.
Async
Refers to a task that is configured to run in the background rather
than waiting for completion. If you have a long process that would
run longer than the SSH timeout, it would make sense to launch that
task in async mode. Async modes can poll for completion every so many
seconds or can be configured to "fire and forget", in which case
Ansible will not even check on the task again; it will just kick it
off and proceed to future steps. Async modes work with both
:command:`/usr/bin/ansible` and :command:`/usr/bin/ansible-playbook`.
Callback Plugin
Refers to some user-written code that can intercept results from
Ansible and do something with them. Some supplied examples in the
GitHub project perform custom logging, send email, or even play sound
effects.
Check Mode
Refers to running Ansible with the ``--check`` option, which does not
make any changes on the remote systems, but only outputs the changes
that might occur if the command ran without this flag. This is
analogous to so-called "dry run" modes in other systems, though the
user should be warned that this does not take into account unexpected
command failures or cascade effects (which is true of similar modes in
other systems). Use this to get an idea of what might happen, but do
not substitute it for a good staging environment.
Collection
A packaging format for bundling and distributing Ansible content, including plugins, roles, modules, and more. Collections release independent of other collections or ``ansible-core`` so features can be available sooner to users. Some collections are packaged with Ansible (version 2.10 or later). You can install other collections (or other versions of collections) with ``ansible-galaxy collection install <namespace.collection>``.
Collection name
The second part of a Fully Qualified Collection Name. The collection name divides the collection namespace and usually reflects the function of the collection content. For example, the ``cisco`` namespace might contain ``cisco.ios``, ``cisco.aci``, and ``cisco.nxos``, with content for managing the different network devices maintained by Cisco.
community.general (collection)
A special collection managed by the Ansible Community Team containing all the modules and plugins which shipped in Ansible 2.9 that do not have their own dedicated Collection. See `community.general <https://galaxy.ansible.com/community/general>`_ on Galaxy.
community.network (collection)
Similar to ``community.general``, focusing on network content. `community.network <https://galaxy.ansible.com/community/network>`_ on Galaxy.
Connection Plugin
By default, Ansible talks to remote machines through pluggable
libraries. Ansible uses native OpenSSH (:term:`SSH (Native)`) or
a Python implementation called :term:`paramiko`. OpenSSH is preferred
if you are using a recent version, and also enables some features like
Kerberos and jump hosts. This is covered in the :ref:`getting
started section <remote_connection_information>`. There are also
other connection types like ``accelerate`` mode, which must be
bootstrapped over one of the SSH-based connection types but is very
fast, and local mode, which acts on the local system. Users can also
write their own connection plugins.
Conditionals
A conditional is an expression that evaluates to true or false that
decides whether a given task is executed on a given machine or not.
Ansible's conditionals are powered by the 'when' statement, which are
discussed in the :ref:`working_with_playbooks`.
Declarative
An approach to achieving a task that uses a description of the
final state rather than a description of the sequence of steps
necessary to achieve that state. For a real world example, a
declarative specification of a task would be: "put me in California".
Depending on your current location, the sequence of steps to get you to
California may vary, and if you are already in California, nothing
at all needs to be done. Ansible's Resources are declarative; it
figures out the steps needed to achieve the final state. It also lets
you know whether or not any steps needed to be taken to get to the
final state.
Diff Mode
A ``--diff`` flag can be passed to Ansible to show what changed on
modules that support it. You can combine it with ``--check`` to get a
good 'dry run'. File diffs are normally in unified diff format.
Distribution server
A server, such as Ansible Galaxy or Red Hat Automation Hub where you can distribute your collections and allow others to access these collections. See :ref:`distributing_collections` for a list of distribution server types. Some Ansible features are only available on certain distribution servers.
Executor
A core software component of Ansible that is the power behind
:command:`/usr/bin/ansible` directly -- and corresponds to the
invocation of each task in a :term:`playbook <playbooks>`. The
Executor is something Ansible developers may talk about, but it's not
really user land vocabulary.
Facts
Facts are simply things that are discovered about remote nodes. While
they can be used in :term:`playbooks` and templates just like
variables, facts are things that are inferred, rather than set. Facts
are automatically discovered by Ansible when running plays by
executing the internal :ref:`setup module <setup_module>` on the remote nodes. You
never have to call the setup module explicitly; it just runs, but it
can be disabled to save time if it is not needed, or you can tell
Ansible to collect only a subset of the full facts through the
``gather_subset:`` option. For the convenience of users who are
switching from other configuration management systems, the fact module
will also pull in facts from the :program:`ohai` and :program:`facter`
tools if they are installed. These are fact libraries from Chef and
Puppet, respectively. (These may also be disabled through
``gather_subset:``.)
Filter Plugin
A filter plugin is something that most users will never need to
understand. These allow for the creation of new :term:`Jinja2`
filters, which are more or less only of use to people who know what
Jinja2 filters are. If you need them, you can learn how to write them
in the :ref:`API docs section <developing_filter_plugins>`.
Forks
Ansible talks to remote nodes in parallel and the level of parallelism
can be set either by passing ``--forks`` or editing the default in
a configuration file. The default is a very conservative five (5)
forks, though if you have a lot of RAM, you can easily set this to
a value like 50 for increased parallelism.
Fully Qualified Collection Name (FQCN)
The full definition of a module, plugin, or role hosted within a collection, in the form <namespace.collection.content_name>. Allows a Playbook to refer to a specific module or plugin from a specific source in an unambiguous manner, for example, ``community.grafana.grafana_dashboard``. The FQCN is required when you want to specify the exact source of a plugin. For example, if multiple collections contain a module plugin called ``user``, the FQCN specifies which one to use for a given task. When you have multiple collections installed, the FQCN is always the explicit and authoritative indicator of which collection to search for the correct plugin for each task.
Gather Facts (Boolean)
:term:`Facts` are mentioned above. Sometimes when running a multi-play
:term:`playbook <playbooks>`, it is desirable to have some plays that
don't bother with fact computation if they aren't going to need to
utilize any of these values. Setting ``gather_facts: False`` on
a playbook allows this implicit fact gathering to be skipped.
Globbing
Globbing is a way to select lots of hosts based on wildcards, rather
than the name of the host specifically, or the name of the group they
are in. For instance, it is possible to select ``www*`` to match all
hosts starting with ``www``. This concept is pulled directly from
:program:`Func`, one of Michael DeHaan's (an Ansible Founder) earlier
projects. In addition to basic globbing, various set operations are
also possible, such as 'hosts in this group and not in another group',
and so on.
Group
A group consists of several hosts assigned to a pool that can be
conveniently targeted together, as well as given variables that they
share in common.
Group Vars
The :file:`group_vars/` files are files that live in a directory
alongside an inventory file, with an optional filename named after
each group. This is a convenient place to put variables that are
provided to a given group, especially complex data structures, so that
these variables do not have to be embedded in the :term:`inventory`
file or :term:`playbook <playbooks>`.
Handlers
Handlers are just like regular tasks in an Ansible
:term:`playbook <playbooks>` (see :term:`Tasks`) but are only run if
the Task contains a ``notify`` keyword and also indicates that it
changed something. For example, if a config file is changed, then the
task referencing the config file templating operation may notify
a service restart handler. This means services can be bounced only if
they need to be restarted. Handlers can be used for things other than
service restarts, but service restarts are the most common usage.
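A minimal sketch of the notify/handler relationship (the file, service, and handler names are illustrative)::

    - hosts: webservers
      tasks:
        - name: Template the config file
          template:
            src: foo.conf.j2
            dest: /etc/foo.conf
          notify: Restart foo

      handlers:
        - name: Restart foo
          service:
            name: foo
            state: restarted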
Host
A host is simply a remote machine that Ansible manages. They can have
individual variables assigned to them, and can also be organized in
groups. All hosts have a name they can be reached at (which is either
an IP address or a domain name) and, optionally, a port number, if they
are not to be accessed on the default SSH port.
Host Specifier
Each :term:`Play <plays>` in Ansible maps a series of :term:`tasks` (which define the role,
purpose, or orders of a system) to a set of systems.
The ``hosts:`` keyword in each play is often called the host specifier.
It may select one system, many systems, one or more groups, or even
some hosts that are in one group and explicitly not in another.
Host Vars
Just like :term:`Group Vars`, a directory alongside the inventory file named
:file:`host_vars/` can contain a file named after each hostname in the
inventory file, in :term:`YAML` format. This provides a convenient place to
assign variables to the host without having to embed them in the
:term:`inventory` file. The Host Vars file can also be used to define complex
data structures that can't be represented in the inventory file.
Idempotency
An operation is idempotent if the result of performing it once is
exactly the same as the result of performing it repeatedly without
any intervening actions.
Includes
The idea that :term:`playbook <playbooks>` files (which are nothing
more than lists of :term:`plays`) can include other lists of plays,
and task lists can externalize lists of :term:`tasks` in other files,
and similarly with :term:`handlers`. Includes can be parameterized,
which means that the loaded file can pass variables. For instance, an
included play for setting up a WordPress blog may take a parameter
called ``user`` and that play could be included more than once to
create a blog for both ``alice`` and ``bob``.
Inventory
A file (by default, Ansible uses a simple INI format) that describes
:term:`Hosts <Host>` and :term:`Groups <Group>` in Ansible. Inventory
can also be provided through an :term:`Inventory Script` (sometimes called
an "External Inventory Script").
Inventory Script
A very simple program (or a complicated one) that looks up
:term:`hosts <Host>`, :term:`group` membership for hosts, and variable
information from an external resource -- whether that be a SQL
database, a CMDB solution, or something like LDAP. This concept was
adapted from Puppet (where it is called an "External Nodes
Classifier") and works more or less exactly the same way.
Jinja2
Jinja2 is the preferred templating language of Ansible's template
module. It is a very simple Python template language that is
generally readable and easy to write.
JSON
Ansible uses JSON for return data from remote modules. This allows
modules to be written in any language, not just Python.
Keyword
The main expressions that make up Ansible, which apply to playbook objects
(Play, Block, Role and Task). For example 'vars:' is a keyword that lets
you define variables in the scope of the playbook object it is applied to.
Lazy Evaluation
In general, Ansible evaluates any variables in
:term:`playbook <playbooks>` content at the last possible second,
which means that if you define a data structure, that data structure
itself can define variable values within it, and everything "just
works" as you would expect. This also means variable strings can
include other variables inside of those strings.
Library
A collection of modules made available to :command:`/usr/bin/ansible`
or an Ansible :term:`playbook <playbooks>`.
Limit Groups
By passing ``--limit somegroup`` to :command:`ansible` or
:command:`ansible-playbook`, the commands can be limited to a subset
of :term:`hosts <Host>`. For instance, this can be used to run
a :term:`playbook <playbooks>` that normally targets an entire set of
servers to one particular server.
Local Action
This keyword is an alias for ``delegate_to: localhost``.
Used when you want to redirect an action from the remote to
execute on the controller itself.
Local Connection
By using ``connection: local`` in a :term:`playbook <playbooks>`, or
passing ``-c local`` to :command:`/usr/bin/ansible`, this indicates
that we are executing a local fork instead of executing on the remote machine.
You probably want ``local_action`` or ``delegate_to: localhost`` instead
as this ONLY changes the connection and no other context for execution.
Lookup Plugin
A lookup plugin is a way to get data into Ansible from the outside world.
Lookup plugins are an extension of Jinja2 and can be accessed in templates, for example,
``{{ lookup('file','/path/to/file') }}``.
This is how such things as ``with_items`` are implemented.
There are also lookup plugins like ``file`` which loads data from
a file and ones for querying environment variables, DNS text records,
or key value stores.
Loops
Generally, Ansible is not a programming language. It prefers to be
more declarative, though various constructs like ``loop`` allow
a particular task to be repeated for multiple items in a list.
Certain modules, like :ref:`yum <yum_module>` and :ref:`apt <apt_module>`, actually take
lists directly, and can install all packages given in those lists
within a single transaction, dramatically speeding up total time to
configuration, so they can be used without loops.
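A minimal loop sketch (the package names are illustrative)::

    - name: Install several packages one at a time
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - httpd
        - mod_ssl

Because :ref:`yum <yum_module>` accepts a list, the same result can be achieved without the loop by passing the whole list to ``name`` directly.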
Modules
Modules are the units of work that Ansible ships out to remote
machines. Modules are kicked off by either
:command:`/usr/bin/ansible` or :command:`/usr/bin/ansible-playbook`
(where multiple tasks use lots of different modules in conjunction).
Modules can be implemented in any language, including Perl, Bash, or
Ruby -- but can take advantage of some useful communal library code if written
in Python. Modules just have to return :term:`JSON`. Once modules are
executed on remote machines, they are removed, so no long running
daemons are used. Ansible refers to the collection of available
modules as a :term:`library`.
Multi-Tier
The concept that IT systems are not managed one system at a time, but
by interactions between multiple systems and groups of systems in
well defined orders. For instance, a web server may need to be
updated before a database server and pieces on the web server may
need to be updated after *THAT* database server and various load
balancers and monitoring servers may need to be contacted. Ansible
models entire IT topologies and workflows rather than looking at
configuration from a "one system at a time" perspective.
Namespace
The first part of a fully qualified collection name, the namespace usually reflects a functional content category. Example: in ``cisco.ios.ios_config``, ``cisco`` is the namespace. Namespaces are reserved and distributed by Red Hat at Red Hat's discretion. Many, but not all, namespaces will correspond with vendor names. See `Galaxy namespaces <https://galaxy.ansible.com/docs/contributing/namespaces.html#galaxy-namespaces>`_ on the Galaxy docsite for namespace requirements.
Notify
The act of a :term:`task <tasks>` registering a change event and
informing a :term:`handler <handlers>` task that another
:term:`action` needs to be run at the end of the :term:`play <plays>`. If
a handler is notified by multiple tasks, it will still be run only
once. Handlers are run in the order they are listed, not in the order
that they are notified.
Orchestration
Many software automation systems use this word to mean different
things. Ansible uses it as a conductor would conduct an orchestra.
A datacenter or cloud architecture is full of many systems, playing
many parts -- web servers, database servers, maybe load balancers,
monitoring systems, continuous integration systems, and so on. In
performing any process, it is necessary to touch systems in particular
orders, often to simulate rolling updates or to deploy software
correctly. Some system may perform some steps, then others, then
previous systems already processed may need to perform more steps.
Along the way, emails may need to be sent or web services contacted.
Ansible orchestration is all about modeling that kind of process.
paramiko
By default, Ansible manages machines over SSH. The library that
Ansible uses by default to do this is a Python-powered library called
paramiko. The paramiko library is generally fast and easy to manage,
though users who want to use Kerberos or Jump Hosts may wish to switch
to a native SSH binary such as OpenSSH by specifying the connection
type in their :term:`playbooks`, or using the ``-c ssh`` flag.
Playbooks
Playbooks are the language by which Ansible orchestrates, configures,
administers, or deploys systems. They are called playbooks partially
because it's a sports analogy, and it's supposed to be fun using them.
They aren't workbooks :)
Plays
A :term:`playbook <playbooks>` is a list of plays. A play is
minimally a mapping between a set of :term:`hosts <Host>` selected by a host
specifier (usually chosen by :term:`groups <Group>` but sometimes by
hostname :term:`globs <Globbing>`) and the :term:`tasks` which run on those
hosts to define the role that those systems will perform. There can be
one or many plays in a playbook.
Pull Mode
By default, Ansible runs in :term:`push mode`, which allows it very
fine-grained control over when it talks to each system. Pull mode is
provided for when you would rather have nodes check in every N minutes
on a particular schedule. It uses a program called
:command:`ansible-pull` and can also be set up (or reconfigured) using
a push-mode :term:`playbook <playbooks>`. Most Ansible users use push
mode, but pull mode is included for variety and the sake of having
choices.
:command:`ansible-pull` works by checking configuration orders out of
git on a crontab and then managing the machine locally, using the
:term:`local connection` plugin.
Pulp 3 Galaxy
A self-hosted distribution server built on the `GalaxyNG codebase <https://galaxyng.netlify.app/>`_ and Pulp version 3. Use it to find and share your own curated set of content. You can access your content with the ``ansible-galaxy collection`` command.
Push Mode
Push mode is the default mode of Ansible. In fact, it's not really
a mode at all -- it's just how Ansible works when you aren't thinking
about it. Push mode allows Ansible to be fine-grained and conduct
nodes through complex orchestration processes without waiting for them
to check in.
Register Variable
The result of running any :term:`task <tasks>` in Ansible can be
stored in a variable for use in a template or a conditional statement.
The keyword used to define the variable is called ``register``, taking
its name from the idea of registers in assembly programming (though
Ansible will never feel like assembly programming). There are an
infinite number of variable names you can use for registration.
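A minimal sketch of registering and reusing a task result (the variable name is arbitrary)::

    - name: Capture the output of a command
      command: /usr/bin/uptime
      register: uptime_result

    - name: Show the captured output
      debug:
        var: uptime_result.stdout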
Resource Model
Ansible modules work in terms of resources. For instance, the
:ref:`file module <file_module>` will select a particular file and ensure
that the attributes of that resource match a particular model. As an
example, we might wish to change the owner of :file:`/etc/motd` to
``root`` if it is not already set to ``root``, or set its mode to
``0644`` if it is not already set to ``0644``. The resource models
are :term:`idempotent <idempotency>` meaning change commands are not
run unless needed, and Ansible will bring the system back to a desired
state regardless of the actual state -- rather than you having to tell
it how to get to the state.
Roles
Roles are units of organization in Ansible. Assigning a role to
a group of :term:`hosts <Host>` (or a set of :term:`groups <group>`,
or :term:`host patterns <Globbing>`, and so on) implies that they should
implement a specific behavior. A role may include applying certain
variable values, certain :term:`tasks`, and certain :term:`handlers`
-- or just one or more of these things. Because of the file structure
associated with a role, roles become redistributable units that allow
you to share behavior among :term:`playbooks` -- or even with other users.
Rolling Update
The act of addressing a number of nodes in a group N at a time to
avoid updating them all at once and bringing the system offline. For
instance, in a web topology of 500 nodes handling very large volume,
it may be reasonable to update 10 or 20 machines at a time, moving on
to the next 10 or 20 when done. The ``serial:`` keyword in an Ansible
:term:`playbook <playbooks>` controls the size of the rolling update pool. The
default is to address the batch size all at once, so this is something
that you must opt in to. OS configuration (such as making sure config
files are correct) does not typically have to use the rolling update
model, but can do so if desired.
Serial
.. seealso::
:term:`Rolling Update`
Sudo
Ansible does not require root logins, and since it's daemonless,
definitely does not require root level daemons (which can be
a security concern in sensitive environments). Ansible can log in and
perform many operations wrapped in a sudo command, and can work with
both password-less and password-based sudo. Some operations that
don't normally work with sudo (like scp file transfer) can be achieved
with Ansible's :ref:`copy <copy_module>`, :ref:`template <template_module>`, and
:ref:`fetch <fetch_module>` modules while running in sudo mode.
SSH (Native)
Native OpenSSH as an Ansible transport is specified with ``-c ssh``
(or a config file, or a keyword in the :term:`playbook <playbooks>`)
and can be useful if wanting to login via Kerberized SSH or using SSH
jump hosts, and so on. As of Ansible 1.2.1, ``ssh`` is used by default if the
OpenSSH binary on the control machine is sufficiently new.
Previously, Ansible selected ``paramiko`` as the default. Using
a client that supports ``ControlMaster`` and ``ControlPersist`` is
recommended for maximum performance -- if you don't have that and
don't need Kerberos, jump hosts, or other features, ``paramiko`` is
a good choice. Ansible will warn you if it doesn't detect
ControlMaster/ControlPersist capability.
Tags
Ansible allows tagging resources in a :term:`playbook <playbooks>`
with arbitrary keywords, and then running only the parts of the
playbook that correspond to those keywords. For instance, it is
possible to have an entire OS configuration, and have certain steps
labeled ``ntp``, and then run just the ``ntp`` steps to reconfigure
the time server information on a remote host.
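For instance, a minimal sketch (the task, file, and playbook names are illustrative)::

    - name: Configure the time server
      template:
        src: ntp.conf.j2
        dest: /etc/ntp.conf
      tags: ntp

Running ``ansible-playbook site.yml --tags ntp`` would then execute only the tagged steps.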
Task
:term:`Playbooks` exist to run tasks. Tasks combine an :term:`action`
(a module and its arguments) with a name and optionally some other
keywords (like :term:`looping keywords <loops>`). :term:`Handlers`
are also tasks, but they are a special kind of task that do not run
unless they are notified by name when a task reports an underlying
change on a remote system.
Tasks
A list of :term:`Task`.
Templates
Ansible can easily transfer files to remote systems but often it is
desirable to substitute variables in other files. Variables may come
from the :term:`inventory` file, :term:`Host Vars`, :term:`Group
Vars`, or :term:`Facts`. Templates use the :term:`Jinja2` template
engine and can also include logical constructs like loops and if
statements.
Transport
Ansible uses :term:`Connection Plugins <Connection Plugin>` to define the types of available
transports. These are simply how Ansible will reach out to managed
systems. Transports included are :term:`paramiko`,
:term:`ssh <SSH (Native)>` (using OpenSSH), and
:term:`local <Local Connection>`.
When
An optional conditional statement attached to a :term:`task <tasks>` that is used to
determine if the task should run or not. If the expression following
the ``when:`` keyword evaluates to false, the task will be ignored.
Vars (Variables)
As opposed to :term:`Facts`, variables are names of values (they can
be simple scalar values -- integers, booleans, strings) or complex
ones (dictionaries/hashes, lists) that can be used in templates and
:term:`playbooks`. They are declared things, not things that are
inferred from the remote system's current state or nature (which is
what Facts are).
YAML
Ansible does not want to force people to write programming language
code to automate infrastructure, so Ansible uses YAML to define
:term:`playbook <playbooks>` configuration languages and also variable
files. YAML is nice because it has a minimum of syntax and is very
clean and easy for people to skim. It is a good data format for
configuration files and humans, but also machine readable. Ansible's
usage of YAML stemmed from Michael DeHaan's first use of it inside of
Cobbler around 2006. YAML is fairly popular in the dynamic language
community and the format has libraries available for serialization in
many languages (Python, Perl, Ruby, and so on).
.. seealso::
:ref:`ansible_faq`
Frequently asked questions
:ref:`working_with_playbooks`
An introduction to playbooks
:ref:`playbooks_best_practices`
Tips and tricks for playbooks
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,001 |
Docs: Replace Latin terms in the reference_appendices/ directory with English terms
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/porting_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/porting_guides/ directory to find these.
List of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79001
|
https://github.com/ansible/ansible/pull/79010
|
8f4133b514f1b4c8b528771804b31ff47a4e0f84
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
| 2022-10-03T20:03:36Z |
python
| 2022-10-04T09:33:40Z |
docs/docsite/rst/reference_appendices/python_3_support.rst
|
================
Python 3 Support
================
Ansible 2.5 and above work with Python 3. Prior to 2.5, using Python 3 was
considered a tech preview. This topic discusses how to set up your controller and managed machines
to use Python 3.
.. note:: On the controller we support Python 3.5 or greater and Python 2.7 or greater. Module-side, we support Python 3.5 or greater and Python 2.6 or greater.
On the controller side
----------------------
The easiest way to run :command:`/usr/bin/ansible` under Python 3 is to install it with the Python3
version of pip. This will make the default :command:`/usr/bin/ansible` run with Python 3:
.. code-block:: shell
$ pip3 install ansible
$ ansible --version | grep "python version"
python version = 3.6.2 (default, Sep 22 2017, 08:28:09) [GCC 7.2.1 20170915 (Red Hat 7.2.1-2)]
If you are running Ansible :ref:`from_source` and want to use Python 3 with your source checkout, run your
command with ``python3``. For example:
.. code-block:: shell
$ source ./hacking/env-setup
$ python3 $(which ansible) localhost -m ping
$ python3 $(which ansible-playbook) sample-playbook.yml
.. note:: Individual Linux distribution packages may be packaged for Python2 or Python3. When running from
distro packages you'll only be able to use Ansible with the Python version for which it was
installed. Sometimes distros will provide a means of installing for several Python versions
(through a separate package or through commands that are run after install). You'll need to check
with your distro to see if that applies in your case.
Using Python 3 on the managed machines with commands and playbooks
------------------------------------------------------------------
* Ansible will automatically detect and use Python 3 on many platforms that ship with it. To explicitly configure a
Python 3 interpreter, set the ``ansible_python_interpreter`` inventory variable at a group or host level to the
location of a Python 3 interpreter, such as :command:`/usr/bin/python3`. The default interpreter path may also be
set in ``ansible.cfg``; a minimal configuration sketch follows this list.
.. seealso:: :ref:`interpreter_discovery` for more information.
.. code-block:: ini
# Example inventory that makes an alias for localhost that uses Python3
localhost-py3 ansible_host=localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python3
# Example of setting a group of hosts to use Python3
[py3_hosts]
ubuntu16
fedora27
[py3_hosts:vars]
ansible_python_interpreter=/usr/bin/python3
.. seealso:: :ref:`intro_inventory` for more information.
* Run your command or playbook:
.. code-block:: shell
$ ansible localhost-py3 -m ping
$ ansible-playbook sample-playbook.yml
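If you prefer to configure the default interpreter rather than set it per host, a minimal ``ansible.cfg`` sketch (assuming the ``interpreter_python`` key in the ``[defaults]`` section) looks like this:

.. code-block:: ini

    [defaults]
    interpreter_python = /usr/bin/python3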
Note that you can also use the ``-e`` command-line option to manually
set the python interpreter when you run a command. This can be useful if you want to test whether
a specific module or playbook has any bugs under Python 3. For example:
.. code-block:: shell
$ ansible localhost -m ping -e 'ansible_python_interpreter=/usr/bin/python3'
$ ansible-playbook sample-playbook.yml -e 'ansible_python_interpreter=/usr/bin/python3'
What to do if an incompatibility is found
-----------------------------------------
We have spent several releases squashing bugs and adding new tests so that Ansible's core feature
set runs under both Python 2 and Python 3. However, bugs may still exist in edge cases and many of
the modules shipped with Ansible are maintained by the community and not all of those may be ported
yet.
If you find a bug running under Python 3 you can submit a bug report on `Ansible's GitHub project
<https://github.com/ansible/ansible/issues/>`_. Be sure to mention Python 3 in the bug report so
that the right people look at it.
If you would like to fix the code and submit a pull request on GitHub, you can
refer to :ref:`developing_python_3` for information on how we fix
common Python3 compatibility issues in the Ansible codebase.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,001 |
Docs: Replace Latin terms in the reference_appendices/ directory with English terms
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/porting_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/porting_guides/ directory to find these.
List of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/reference_appendices/index.rst
### Ansible Version
```console
$ ansible --version
none
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79001
|
https://github.com/ansible/ansible/pull/79010
|
8f4133b514f1b4c8b528771804b31ff47a4e0f84
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
| 2022-10-03T20:03:36Z |
python
| 2022-10-04T09:33:40Z |
docs/docsite/rst/reference_appendices/special_variables.rst
|
.. _special_variables:
Special Variables
=================
Magic variables
---------------
These variables cannot be set directly by the user; Ansible will always override them to reflect internal state.
ansible_check_mode
Boolean that indicates if we are in check mode or not
ansible_config_file
The full path of used Ansible configuration file
ansible_dependent_role_names
The names of the roles currently imported into the current play as dependencies of other plays
ansible_diff_mode
Boolean that indicates if we are in diff mode or not
ansible_forks
Integer reflecting the number of maximum forks available to this run
ansible_inventory_sources
List of sources used as inventory
ansible_limit
Contents of the ``--limit`` CLI option for the current execution of Ansible
ansible_loop
A dictionary/map containing extended loop information when enabled through ``loop_control.extended``
ansible_loop_var
The name of the value provided to ``loop_control.loop_var``. Added in ``2.8``
ansible_index_var
The name of the value provided to ``loop_control.index_var``. Added in ``2.9``
ansible_parent_role_names
When the current role is being executed by means of an :ref:`include_role <include_role_module>` or :ref:`import_role <import_role_module>` action, this variable contains a list of all parent roles, with the most recent role (in other words, the role that included/imported this role) being the first item in the list.
When multiple inclusions occur, this list contains the *last* role (in other words, the role that included this role) as the *first* item in the list. It is also possible for a specific role to appear more than once in this list.
For example: When role **A** includes role **B**, inside role B, ``ansible_parent_role_names`` will be equal to ``['A']``. If role **B** then includes role **C**, the list becomes ``['B', 'A']``.
ansible_parent_role_paths
When the current role is being executed by means of an :ref:`include_role <include_role_module>` or :ref:`import_role <import_role_module>` action, this variable contains a list of all parent roles paths, with the most recent role (in other words, the role that included/imported this role) being the first item in the list.
Please refer to ``ansible_parent_role_names`` for the order of items in this list.
ansible_play_batch
List of active hosts in the current play run limited by the serial, also known as the 'batch'. Failed/Unreachable hosts are not considered 'active'.
ansible_play_hosts
List of hosts in the current play run, not limited by the serial. Failed/Unreachable hosts are excluded from this list.
ansible_play_hosts_all
List of all the hosts that were targeted by the play
ansible_play_role_names
The names of the roles currently imported into the current play. This list does **not** contain the role names that are
implicitly included through dependencies.
ansible_playbook_python
The path to the python interpreter being used by Ansible on the controller
ansible_role_names
The names of the roles currently imported into the current play, or roles referenced as dependencies of the roles
imported into the current play.
ansible_role_name
The fully qualified collection role name, in the format of ``namespace.collection.role_name``
ansible_collection_name
The name of the collection the task that is executing is a part of. In the format of ``namespace.collection``
ansible_run_tags
Contents of the ``--tags`` CLI option, which specifies which tags will be included for the current run. Note that if ``--tags`` is not passed, this variable will default to ``["all"]``.
ansible_search_path
Current search path for action plugins and lookups, in other words, where we search for relative paths when you do ``template: src=myfile``
ansible_skip_tags
Contents of the ``--skip-tags`` CLI option, which specifies which tags will be skipped for the current run.
ansible_verbosity
Current verbosity setting for Ansible
ansible_version
Dictionary/map that contains information about the currently running version of Ansible. It has the following keys: full, major, minor, revision, and string.
group_names
List of groups the current host is part of
groups
A dictionary/map with all the groups in inventory and each group has the list of hosts that belong to it
hostvars
A dictionary/map with all the hosts in inventory and variables assigned to them
inventory_hostname
The inventory name for the 'current' host being iterated over in the play
inventory_hostname_short
The short version of `inventory_hostname`
inventory_dir
The directory of the inventory source in which the `inventory_hostname` was first defined
inventory_file
The file name of the inventory source in which the `inventory_hostname` was first defined
omit
Special variable that allows you to 'omit' an option in a task, for example ``- user: name=bob home={{ bobs_home|default(omit) }}``
play_hosts
Deprecated, the same as ansible_play_batch
ansible_play_name
The name of the currently executed play. Added in ``2.8``. (`name` attribute of the play, not file name of the playbook.)
playbook_dir
The path to the directory of the current playbook being executed. NOTE: This might be different from the directory of the playbook passed to the ``ansible-playbook`` command line when a playbook contains an ``import_playbook`` statement.
role_name
The name of the role currently being executed.
role_names
Deprecated, the same as ansible_play_role_names
role_path
The path to the dir of the currently running role
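As a quick illustration, this minimal sketch prints a few of the magic variables above for every host (output depends on your inventory):

.. code-block:: yaml

    - hosts: all
      gather_facts: false
      tasks:
        - name: Show a few magic variables
          debug:
            msg: "Host {{ inventory_hostname }} belongs to groups {{ group_names }}"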
Facts
-----
These are variables that contain information pertinent to the current host (`inventory_hostname`). They are only available if gathered first. See :ref:`vars_and_facts` for more information.
ansible_facts
Contains any facts gathered or cached for the `inventory_hostname`
Facts are normally gathered by the :ref:`setup <setup_module>` module automatically in a play, but any module can return facts.
ansible_local
Contains any 'local facts' gathered or cached for the `inventory_hostname`.
The keys available depend on the custom facts created.
See the :ref:`setup <setup_module>` module and :ref:`local_facts` for more details.
.. _connection_variables:
Connection variables
---------------------
Connection variables are normally used to set the specifics on how to execute actions on a target. Most of them correspond to connection plugins, but not all are specific to them; other plugins like shell, terminal and become are normally involved.
Only the common ones are described here, as each connection, become, shell, or similar plugin can define its own overrides and specific variables.
See :ref:`general_precedence_rules` for how connection variables interact with :ref:`configuration settings<ansible_configuration_settings>`, :ref:`command-line options<command_line_tools>`, and :ref:`playbook keywords<playbook_keywords>`.
ansible_become_user
The user Ansible 'becomes' after using privilege escalation. This must be available to the 'login user'.
ansible_connection
The connection plugin actually used for the task on the target host.
ansible_host
The ip/name of the target host to use instead of `inventory_hostname`.
ansible_python_interpreter
The path to the Python executable Ansible should use on the target host.
ansible_user
The user Ansible 'logs in' as.
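A minimal inventory sketch that sets several of these connection variables for one host (the host name and values are illustrative):

.. code-block:: ini

    web1 ansible_host=203.0.113.10 ansible_user=deploy ansible_python_interpreter=/usr/bin/python3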
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,002 |
Docs: Replace Latin terms with English in the scenario_guides directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g, via, etc) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/scenario_guides/ directory .
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/scenario_guides/ directory to find these.
List of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79002
|
https://github.com/ansible/ansible/pull/79008
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
|
367cdae3b279a5281a56808827af27c8883a4ad4
| 2022-10-03T20:07:19Z |
python
| 2022-10-04T09:35:45Z |
docs/docsite/rst/scenario_guides/guide_azure.rst
|
Microsoft Azure Guide
=====================
.. important::
Red Hat Ansible Automation Platform will soon be available on Microsoft Azure. `Sign up to preview the experience <https://www.redhat.com/en/engage/ansible-microsoft-azure-e-202110220735>`_.
Ansible includes a suite of modules for interacting with Azure Resource Manager, giving you the tools to easily create
and orchestrate infrastructure on the Microsoft Azure Cloud.
Requirements
------------
Using the Azure Resource Manager modules requires having specific Azure SDK modules
installed on the host running Ansible.
.. code-block:: bash
$ pip install 'ansible[azure]'
If you are running Ansible from source, you can install the dependencies from the
root directory of the Ansible repo.
.. code-block:: bash
$ pip install .[azure]
You can also directly run Ansible in `Azure Cloud Shell <https://shell.azure.com>`_, where Ansible is pre-installed.
Authenticating with Azure
-------------------------
Using the Azure Resource Manager modules requires authenticating with the Azure API. You can choose from two authentication strategies:
* Active Directory Username/Password
* Service Principal Credentials
Follow the directions for the strategy you wish to use, then proceed to `Providing Credentials to Azure Modules`_ for
instructions on how to actually use the modules and authenticate with the Azure API.
Using Service Principal
.......................
There is now a detailed official tutorial describing `how to create a service principal <https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal>`_.
After stepping through the tutorial you will have:
* Your Client ID, which is found in the "client id" box in the "Configure" page of your application in the Azure portal
* Your Secret key, generated when you created the application. The key cannot be displayed after creation.
If you lose the key, you must create a new one in the "Configure" page of your application.
* And finally, a tenant ID. It's a UUID (for example, ABCDEFGH-1234-ABCD-1234-ABCDEFGHIJKL) pointing to the AD containing your
application. You will find it in the URL from within the Azure portal, or in the "view endpoints" of any given URL.
Using Active Directory Username/Password
........................................
To create an Active Directory username/password:
* Connect to the Azure Classic Portal with your admin account
* Create a user in your default AAD. You must NOT activate Multi-Factor Authentication
* Go to Settings - Administrators
* Click on Add and enter the email of the new user.
* Check the checkbox of the subscription you want to test with this user.
* Log in to the Azure Portal with this new user to change the temporary password to a new one. You will not be able to use the
temporary password for OAuth login.
Providing Credentials to Azure Modules
......................................
The modules offer several ways to provide your credentials. For a CI/CD tool such as Ansible AWX or Jenkins, you will
most likely want to use environment variables. For local development you may wish to store your credentials in a file
within your home directory. And of course, you can always pass credentials as parameters to a task within a playbook. The
order of precedence is parameters, then environment variables, and finally a file found in your home directory.
Using Environment Variables
```````````````````````````
To pass service principal credentials through the environment, define the following variables:
* AZURE_CLIENT_ID
* AZURE_SECRET
* AZURE_SUBSCRIPTION_ID
* AZURE_TENANT
To pass Active Directory username/password through the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_SUBSCRIPTION_ID
To pass Active Directory username/password in ADFS through the environment, define the following variables:
* AZURE_AD_USER
* AZURE_PASSWORD
* AZURE_CLIENT_ID
* AZURE_TENANT
* AZURE_ADFS_AUTHORITY_URL
"AZURE_ADFS_AUTHORITY_URL" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Storing in a File
`````````````````
When working in a development environment, it may be desirable to store credentials in a file. The modules will look
for credentials in ``$HOME/.azure/credentials``. This is an INI-style file that looks as follows:
.. code-block:: ini
[default]
subscription_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
client_id=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
secret=xxxxxxxxxxxxxxxxx
tenant=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
.. note:: If your secret values contain non-ASCII characters, you must `URL Encode <https://www.w3schools.com/tags/ref_urlencode.asp>`_ them to avoid login errors.
It is possible to store multiple sets of credentials within the credentials file by creating multiple sections. Each
section is considered a profile. The modules look for the [default] profile automatically. Define AZURE_PROFILE in the
environment or pass a profile parameter to specify a specific profile.
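For example, to select a non-default profile for a single run (the profile and playbook names are illustrative):

.. code-block:: bash

    $ AZURE_PROFILE=staging ansible-playbook azure_deploy.yml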
Passing as Parameters
`````````````````````
If you wish to pass credentials as parameters to a task, use the following parameters for service principal:
* client_id
* secret
* subscription_id
* tenant
Or, pass the following parameters for Active Directory username/password:
* ad_user
* password
* subscription_id
Or, pass the following parameters for ADFS username/password:
* ad_user
* password
* client_id
* tenant
* adfs_authority_url
"adfs_authority_url" is optional. It's necessary only when you have own ADFS authority like https://yourdomain.com/adfs.
Other Cloud Environments
------------------------
To use an Azure Cloud other than the default public cloud (for example, Azure China Cloud, Azure US Government Cloud, Azure Stack),
pass the "cloud_environment" argument to modules, configure it in a credential profile, or set the "AZURE_CLOUD_ENVIRONMENT"
environment variable. The value is either a cloud name as defined by the Azure Python SDK (for example, "AzureChinaCloud",
"AzureUSGovernment"; defaults to "AzureCloud") or an Azure metadata discovery URL (for Azure Stack).
Creating Virtual Machines
-------------------------
There are two ways to create a virtual machine, both involving the azure_rm_virtualmachine module. We can either create
a storage account, network interface, security group and public IP address and pass the names of these objects to the
module as parameters, or we can let the module do the work for us and accept the defaults it chooses.
Creating Individual Components
..............................
An Azure module is available to help you create a storage account, virtual network, subnet, network interface,
security group and public IP. Here is a full example of creating each of these and passing the names to the
``azure.azcollection.azure_rm_virtualmachine`` module at the end:
.. code-block:: yaml
- name: Create storage account
azure.azcollection.azure_rm_storageaccount:
resource_group: Testing
name: testaccount001
account_type: Standard_LRS
- name: Create virtual network
azure.azcollection.azure_rm_virtualnetwork:
resource_group: Testing
name: testvn001
address_prefixes: "10.10.0.0/16"
- name: Add subnet
azure.azcollection.azure_rm_subnet:
resource_group: Testing
name: subnet001
address_prefix: "10.10.0.0/24"
virtual_network: testvn001
- name: Create public ip
azure.azcollection.azure_rm_publicipaddress:
resource_group: Testing
allocation_method: Static
name: publicip001
- name: Create security group that allows SSH
azure.azcollection.azure_rm_securitygroup:
resource_group: Testing
name: secgroup001
rules:
- name: SSH
protocol: Tcp
destination_port_range: 22
access: Allow
priority: 101
direction: Inbound
- name: Create NIC
azure.azcollection.azure_rm_networkinterface:
resource_group: Testing
name: testnic001
virtual_network: testvn001
subnet: subnet001
public_ip_name: publicip001
security_group: secgroup001
- name: Create virtual machine
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm001
vm_size: Standard_D1
storage_account: testaccount001
storage_container: testvm001
storage_blob: testvm001.vhd
admin_username: admin
admin_password: Password!
network_interfaces: testnic001
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Each of the Azure modules offers a variety of parameter options. Not all options are demonstrated in the above example.
See each individual module for further details and examples.
Creating a Virtual Machine with Default Options
...............................................
If you simply want to create a virtual machine without specifying all the details, you can do that as well. The only
caveat is that you will need a virtual network with one subnet already in your resource group. Assuming you have a
virtual network already with an existing subnet, you can run the following to create a VM:
.. code-block:: yaml
azure.azcollection.azure_rm_virtualmachine:
resource_group: Testing
name: testvm10
vm_size: Standard_D1
admin_username: chouseknecht
ssh_password_enabled: false
ssh_public_keys: "{{ ssh_keys }}"
image:
offer: CentOS
publisher: OpenLogic
sku: '7.1'
version: latest
Creating a Virtual Machine in Availability Zones
..................................................
If you want to create a VM in an availability zone,
consider the following (a combined sketch follows the list):
* Both OS disk and data disk must be a 'managed disk', not an 'unmanaged disk'.
* When creating a VM with the ``azure.azcollection.azure_rm_virtualmachine`` module,
you need to explicitly set the ``managed_disk_type`` parameter
to change the OS disk to a managed disk.
Otherwise, the OS disk becomes an unmanaged disk.
* When you create a data disk with the ``azure.azcollection.azure_rm_manageddisk`` module,
you need to explicitly specify the ``storage_account_type`` parameter
to make it a managed disk.
Otherwise, the data disk will be an unmanaged disk.
* A managed disk does not require a storage account or a storage container,
unlike an unmanaged disk.
In particular, note that once a VM is created on an unmanaged disk,
an unnecessary storage container named "vhds" is automatically created.
* When you create an IP address with the ``azure.azcollection.azure_rm_publicipaddress`` module,
you must set the ``sku`` parameter to ``standard``.
Otherwise, the IP address cannot be used in an availability zone.
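Putting those points together, here is a minimal, hedged sketch of a zone-pinned VM with a managed OS disk. The ``zones`` parameter name is an assumption about the module's zone-placement option, and values such as the resource group are placeholders:

.. code-block:: yaml

    - name: Create a standard-SKU public IP usable in an availability zone
      azure.azcollection.azure_rm_publicipaddress:
        resource_group: Testing
        name: publicip001
        allocation_method: Static
        sku: standard

    - name: Create a VM with a managed OS disk in zone 1
      azure.azcollection.azure_rm_virtualmachine:
        resource_group: Testing
        name: testvm001
        vm_size: Standard_D1
        managed_disk_type: Standard_LRS
        zones: ['1']                     # assumed parameter for zone placement
        admin_username: admin
        ssh_password_enabled: false
        ssh_public_keys: "{{ ssh_keys }}"
        image:
          offer: CentOS
          publisher: OpenLogic
          sku: '7.1'
          version: latest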
Dynamic Inventory Script
------------------------
If you are not familiar with Ansible's dynamic inventory scripts, check out :ref:`Intro to Dynamic Inventory <intro_dynamic_inventory>`.
The Azure Resource Manager inventory script is called `azure_rm.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py>`_. It authenticates with the Azure API exactly the same as the
Azure modules, which means you will either define the same environment variables described above in `Using Environment Variables`_,
create a ``$HOME/.azure/credentials`` file (also described above in `Storing in a File`_), or pass command-line parameters. To see available
command-line options, execute the following:
.. code-block:: bash
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
$ ./azure_rm.py --help
As with all dynamic inventory scripts, the script can be executed directly, passed as a parameter to the ansible command,
or passed directly to ansible-playbook using the -i option. No matter how it is executed, the script produces JSON representing
all of the hosts found in your Azure subscription. You can narrow this down to just hosts found in a specific set of
Azure resource groups, or even down to a specific host.
For a given host, the inventory script provides the following host variables:
.. code-block:: json
{
"ansible_host": "XXX.XXX.XXX.XXX",
"computer_name": "computer_name2",
"fqdn": null,
"id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Compute/virtualMachines/object-name",
"image": {
"offer": "CentOS",
"publisher": "OpenLogic",
"sku": "7.1",
"version": "latest"
},
"location": "westus",
"mac_address": "00-00-5E-00-53-FE",
"name": "object-name",
"network_interface": "interface-name",
"network_interface_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkInterfaces/object-name1",
"network_security_group": null,
"network_security_group_id": null,
"os_disk": {
"name": "object-name",
"operating_system_type": "Linux"
},
"plan": null,
"powerstate": "running",
"private_ip": "172.26.3.6",
"private_ip_alloc_method": "Static",
"provisioning_state": "Succeeded",
"public_ip": "XXX.XXX.XXX.XXX",
"public_ip_alloc_method": "Static",
"public_ip_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/publicIPAddresses/object-name",
"public_ip_name": "object-name",
"resource_group": "galaxy-production",
"security_group": "object-name",
"security_group_id": "/subscriptions/subscription-id/resourceGroups/galaxy-production/providers/Microsoft.Network/networkSecurityGroups/object-name",
"tags": {
"db": "mysql"
},
"type": "Microsoft.Compute/virtualMachines",
"virtual_machine_size": "Standard_DS4"
}
Host Groups
...........
By default, hosts are grouped by:
* azure (all hosts)
* location name
* resource group name
* security group name
* tag key
* tag key_value
* os_disk operating_system_type (Windows/Linux)
You can control host groupings and host selection by either defining environment variables or creating an
azure_rm.ini file in your current working directory.
NOTE: A .ini file takes precedence over environment variables.
NOTE: The name of the .ini file is the basename of the inventory script (in other words, 'azure_rm') with a '.ini'
extension. This allows you to copy, rename and customize the inventory script and have matching .ini files all in
the same directory.
Control grouping using the following variables defined in the environment:
* AZURE_GROUP_BY_RESOURCE_GROUP=yes
* AZURE_GROUP_BY_LOCATION=yes
* AZURE_GROUP_BY_SECURITY_GROUP=yes
* AZURE_GROUP_BY_TAG=yes
* AZURE_GROUP_BY_OS_FAMILY=yes
Select hosts within specific resource groups by assigning a comma separated list to:
* AZURE_RESOURCE_GROUPS=resource_group_a,resource_group_b
Select hosts for specific tag key by assigning a comma separated list of tag keys to:
* AZURE_TAGS=key1,key2,key3
Select hosts for specific locations by assigning a comma separated list of locations to:
* AZURE_LOCATIONS=eastus,eastus2,westus
Or, select hosts for specific tag key:value pairs by assigning a comma separated list key:value pairs to:
* AZURE_TAGS=key1:value1,key2:value2
If you don't need the powerstate, you can improve performance by turning off powerstate fetching:
* AZURE_INCLUDE_POWERSTATE=no
A sample azure_rm.ini file is included along with the inventory script
`here <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.ini>`_.
The .ini file contains the following:
.. code-block:: ini
[azure]
# Control which resource groups are included. By default all resources groups are included.
# Set resource_groups to a comma separated list of resource groups names.
#resource_groups=
# Control which tags are included. Set tags to a comma separated list of keys or key:value pairs
#tags=
# Control which locations are included. Set locations to a comma separated list of locations.
#locations=
# Include powerstate. If you don't need powerstate information, turning it off improves runtime performance.
# Valid values: yes, no, true, false, True, False, 0, 1.
include_powerstate=yes
# Control grouping with the following boolean flags. Valid values: yes, no, true, false, True, False, 0, 1.
group_by_resource_group=yes
group_by_location=yes
group_by_security_group=yes
group_by_tag=yes
group_by_os_family=yes
Examples
........
Here are some examples using the inventory script:
.. code-block:: bash
# Download inventory script
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/azure_rm.py
# Execute /bin/uname on all instances in the Testing resource group
$ ansible -i azure_rm.py Testing -m shell -a "/bin/uname -a"
# Execute win_ping on all Windows instances
$ ansible -i azure_rm.py windows -m win_ping
# Execute ping on all Linux instances
$ ansible -i azure_rm.py linux -m ping
# Use the inventory script to print instance specific information
$ ./azure_rm.py --host my_instance_host_name --resource-groups=Testing --pretty
# Use the inventory script with ansible-playbook
$ ansible-playbook -i ./azure_rm.py test_playbook.yml
Here is a simple playbook to exercise the Azure inventory script:
.. code-block:: yaml
- name: Test the inventory script
hosts: azure
connection: local
gather_facts: false
tasks:
- debug:
msg: "{{ inventory_hostname }} has powerstate {{ powerstate }}"
You can execute the playbook with something like:
.. code-block:: bash
$ ansible-playbook -i ./azure_rm.py test_azure_inventory.yml
Disabling certificate validation on Azure endpoints
...................................................
When an HTTPS proxy is present, or when using Azure Stack, it may be necessary to disable certificate validation for
Azure endpoints in the Azure modules. This is not a recommended security practice, but may be necessary when the system
CA store cannot be altered to include the necessary CA certificate. Certificate validation can be controlled by setting
the "cert_validation_mode" value in a credential profile, via the "AZURE_CERT_VALIDATION_MODE" environment variable, or
by passing the "cert_validation_mode" argument to any Azure module. The default value is "validate"; setting the value
to "ignore" will prevent all certificate validation. The module argument takes precedence over a credential profile value,
which takes precedence over the environment value.
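For example, a minimal sketch (not recommended outside of Azure Stack or intercepting-proxy situations):

.. code-block:: yaml

    - name: Query VM info without validating the endpoint certificate
      azure.azcollection.azure_rm_virtualmachine_info:
        resource_group: Testing
        cert_validation_mode: ignore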
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,002 |
Docs: Replace Latin terms with English in the scenario_guides directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g., via, etc.) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/scenario_guides/ directory.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/scenario_guides/ directory to find these.
A list of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79002
|
https://github.com/ansible/ansible/pull/79008
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
|
367cdae3b279a5281a56808827af27c8883a4ad4
| 2022-10-03T20:07:19Z |
python
| 2022-10-04T09:35:45Z |
docs/docsite/rst/scenario_guides/guide_packet.rst
|
**********************************
Packet.net Guide
**********************************
Introduction
============
`Packet.net <https://packet.net>`_ is a bare metal infrastructure host supported by Ansible (>=2.3) through a dynamic inventory script and two cloud modules. The two modules are:
- packet_sshkey: adds a public SSH key from file or value to the Packet infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
- packet_device: manages servers on Packet. You can use this module to create, restart and delete devices.
Note: this guide assumes you are familiar with Ansible and how it works. If you're not, have a look at the Ansible :ref:`docs <ansible_documentation>` before getting started.
Requirements
============
The Packet modules and inventory script connect to the Packet API using the packet-python package. You can install it with pip:
.. code-block:: bash
$ pip install packet-python
In order to check the state of devices created by Ansible on Packet, it's a good idea to install one of the `Packet CLI clients <https://www.packet.net/developers/integrations/>`_. Otherwise you can check them through the `Packet portal <https://app.packet.net/portal>`_.
To use the modules and inventory script you'll need a Packet API token. You can generate an API token in the Packet portal `here <https://app.packet.net/portal#/api-keys>`__. The simplest way to authenticate yourself is to set the Packet API token in an environment variable:
.. code-block:: bash
$ export PACKET_API_TOKEN=Bfse9F24SFtfs423Gsd3ifGsd43sSdfs
If you're not comfortable exporting your API token, you can pass it as a parameter to the modules.
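For example, a minimal sketch that assumes the modules accept an ``auth_token`` parameter and that ``packet_api_token`` is defined elsewhere (ideally in Ansible Vault):

.. code-block:: yaml

    - packet_device:
        auth_token: "{{ packet_api_token }}"   # assumed variable, for example from Ansible Vault
        project_id: <your_project_id>
        hostnames: myserver
        operating_system: ubuntu_16_04
        plan: baremetal_0
        facility: sjc1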
On Packet, devices and reserved IP addresses belong to `projects <https://www.packet.com/developers/api/#projects>`_. In order to use the packet_device module, you need to specify the UUID of the project in which you want to create or manage devices. You can find a project's UUID in the Packet portal `here <https://app.packet.net/portal#/projects/list/table/>`_ (it's just under the project table) or by using one of the available `CLIs <https://www.packet.net/developers/integrations/>`_.
If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
.. code-block:: bash
$ ssh-keygen -t rsa -f ./id_rsa
If you want to use an existing key pair, just copy the private and public key over to the playbook directory.
Device Creation
===============
The following code block is a simple playbook that creates one `Type 0 <https://www.packet.com/cloud/servers/t1-small/>`_ server (the 'plan' parameter). You have to supply 'plan' and 'operating_system'. 'location' defaults to 'ewr1' (Parsippany, NJ). You can find all the possible values for the parameters with a `CLI client <https://www.packet.net/developers/integrations/>`_.
.. code-block:: yaml
# playbook_create.yml
- name: create ubuntu device
hosts: localhost
tasks:
- packet_sshkey:
key_file: ./id_rsa.pub
label: tutorial key
- packet_device:
project_id: <your_project_id>
hostnames: myserver
operating_system: ubuntu_16_04
plan: baremetal_0
facility: sjc1
After running ``ansible-playbook playbook_create.yml``, you should have a server provisioned on Packet. You can verify this with a CLI or in the `Packet portal <https://app.packet.net/portal#/projects/list/table>`__.
If you get an error with the message "failed to set machine state present, error: Error 404: Not Found", please verify your project UUID.
Updating Devices
================
The two parameters used to uniquely identify Packet devices are: "device_ids" and "hostnames". Both parameters accept either a single string (later converted to a one-element list), or a list of strings.
The 'device_ids' and 'hostnames' parameters are mutually exclusive. The following values are all acceptable:
- device_ids: a27b7a83-fc93-435b-a128-47a5b04f2dcf
- hostnames: mydev1
- device_ids: [a27b7a83-fc93-435b-a128-47a5b04f2dcf, 4887130f-0ccd-49a0-99b0-323c1ceb527b]
- hostnames: [mydev1, mydev2]
In addition, hostnames can contain a special '%d' formatter along with a 'count' parameter that lets you easily expand hostnames that follow a simple name and number pattern; in other words, ``hostnames: "mydev%d", count: 2`` will expand to [mydev1, mydev2].
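For instance, a minimal sketch using the formatter (same placeholder project ID as above):

.. code-block:: yaml

    - packet_device:
        project_id: <your_project_id>
        hostnames: "mydev%d"
        count: 2
        operating_system: ubuntu_16_04
        plan: baremetal_0
        facility: ewr1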
If your playbook acts on existing Packet devices, you can only pass the 'hostname' and 'device_ids' parameters. The following playbook shows how you can reboot a specific Packet device by setting the 'hostname' parameter:
.. code-block:: yaml
# playbook_reboot.yml
- name: reboot myserver
hosts: localhost
tasks:
- packet_device:
project_id: <your_project_id>
hostnames: myserver
state: rebooted
You can also identify specific Packet devices with the 'device_ids' parameter. The device's UUID can be found in the `Packet Portal <https://app.packet.net/portal>`_ or by using a `CLI <https://www.packet.net/developers/integrations/>`_. The following playbook removes a Packet device using the 'device_ids' field:
.. code-block:: yaml
# playbook_remove.yml
- name: remove a device
hosts: localhost
tasks:
- packet_device:
project_id: <your_project_id>
device_ids: <myserver_device_id>
state: absent
More Complex Playbooks
======================
In this example, we'll create a CoreOS cluster with `user data <https://packet.com/developers/docs/servers/key-features/user-data/>`_.
The CoreOS cluster will use `etcd <https://etcd.io/>`_ for discovery of other servers in the cluster. Before provisioning your servers, you'll need to generate a discovery token for your cluster:
.. code-block:: bash
$ curl -w "\n" 'https://discovery.etcd.io/new?size=3'
The following playbook will create an SSH key, 3 Packet servers, and then wait until SSH is ready (or until 5 minutes passed). Make sure to substitute the discovery token URL in 'user_data', and the 'project_id' before running ``ansible-playbook``. Also, feel free to change 'plan' and 'facility'.
.. code-block:: yaml
# playbook_coreos.yml
- name: Start 3 CoreOS nodes in Packet and wait until SSH is ready
hosts: localhost
tasks:
- packet_sshkey:
key_file: ./id_rsa.pub
label: new
- packet_device:
hostnames: [coreos-one, coreos-two, coreos-three]
operating_system: coreos_beta
plan: baremetal_0
facility: ewr1
project_id: <your_project_id>
wait_for_public_IPv: 4
user_data: |
#cloud-config
coreos:
etcd2:
discovery: https://discovery.etcd.io/<token>
advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001
initial-advertise-peer-urls: http://$private_ipv4:2380
listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001
listen-peer-urls: http://$private_ipv4:2380
fleet:
public-ip: $private_ipv4
units:
- name: etcd2.service
command: start
- name: fleet.service
command: start
register: newhosts
- name: wait for ssh
wait_for:
delay: 1
host: "{{ item.public_ipv4 }}"
port: 22
state: started
timeout: 500
loop: "{{ newhosts.results[0].devices }}"
As with most Ansible modules, the default states of the Packet modules are idempotent, meaning the resources in your project will remain the same after re-runs of a playbook. Thus, we can keep the ``packet_sshkey`` module call in our playbook. If the public key is already in your Packet account, the call will have no effect.
The second module call provisions 3 Packet Type 0 (specified using the 'plan' parameter) servers in the project identified by the 'project_id' parameter. The servers are all provisioned with CoreOS beta (the 'operating_system' parameter) and are customized with cloud-config user data passed to the 'user_data' parameter.
The ``packet_device`` module has a ``wait_for_public_IPv`` parameter that specifies the version of the IP address to wait for (valid values are ``4`` or ``6`` for IPv4 or IPv6). If specified, Ansible will wait until the GET API call for a device contains an Internet-routable IP address of the specified version. When referring to an IP address of a created device in subsequent module calls, it's wise to use the ``wait_for_public_IPv`` parameter, or ``state: active`` in the packet_device module call.
Run the playbook:
.. code-block:: bash
$ ansible-playbook playbook_coreos.yml
Once the playbook quits, your new devices should be reachable through SSH. Try to connect to one and check if etcd has started properly:
.. code-block:: bash
tomk@work $ ssh -i id_rsa core@$one_of_the_servers_ip
core@coreos-one ~ $ etcdctl cluster-health
Once you create a couple of devices, you might appreciate the dynamic inventory script...
Dynamic Inventory Script
========================
The dynamic inventory script queries the Packet API for a list of hosts, and exposes it to Ansible so you can easily identify and act on Packet devices.
You can find it in Ansible Community General Collection's git repo at `scripts/inventory/packet_net.py <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py>`_.
The inventory script is configurable through an `ini file <https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini>`_.
If you want to use the inventory script, you must first export your Packet API token to a PACKET_API_TOKEN environment variable.
You can either copy the inventory and ini config out from the cloned git repo, or you can download it to your working directory like so:
.. code-block:: bash
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.py
$ chmod +x packet_net.py
$ wget https://raw.githubusercontent.com/ansible-community/contrib-scripts/main/inventory/packet_net.ini
In order to understand what the inventory script gives to Ansible you can run:
.. code-block:: bash
$ ./packet_net.py --list
It should print a JSON document looking similar to following trimmed dictionary:
.. code-block:: json
{
"_meta": {
"hostvars": {
"147.75.64.169": {
"packet_billing_cycle": "hourly",
"packet_created_at": "2017-02-09T17:11:26Z",
"packet_facility": "ewr1",
"packet_hostname": "coreos-two",
"packet_href": "/devices/d0ab8972-54a8-4bff-832b-28549d1bec96",
"packet_id": "d0ab8972-54a8-4bff-832b-28549d1bec96",
"packet_locked": false,
"packet_operating_system": "coreos_beta",
"packet_plan": "baremetal_0",
"packet_state": "active",
"packet_updated_at": "2017-02-09T17:16:35Z",
"packet_user": "core",
"packet_userdata": "#cloud-config\ncoreos:\n etcd2:\n discovery: https://discovery.etcd.io/e0c8a4a9b8fe61acd51ec599e2a4f68e\n advertise-client-urls: http://$private_ipv4:2379,http://$private_ipv4:4001\n initial-advertise-peer-urls: http://$private_ipv4:2380\n listen-client-urls: http://0.0.0.0:2379,http://0.0.0.0:4001\n listen-peer-urls: http://$private_ipv4:2380\n fleet:\n public-ip: $private_ipv4\n units:\n - name: etcd2.service\n command: start\n - name: fleet.service\n command: start"
}
}
},
"baremetal_0": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249",
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"coreos_beta": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249",
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"ewr1": [
"147.75.64.129",
"147.75.192.51",
"147.75.64.169"
],
"sjc1": [
"147.75.202.255",
"147.75.202.251",
"147.75.202.249"
],
"coreos-two": [
"147.75.64.169"
],
"d0ab8972-54a8-4bff-832b-28549d1bec96": [
"147.75.64.169"
]
}
In the ``['_meta']['hostvars']`` key, there is a list of devices (uniquely identified by their public IPv4 address) with their parameters. The other keys under ``['_meta']`` are lists of devices grouped by some parameter. Here, it is type (all devices are of type baremetal_0), operating system, and facility (ewr1 and sjc1).
In addition to the parameter groups, there are also one-item groups with the UUID or hostname of the device.
You can now target groups in playbooks! The following playbook will install a role that supplies resources for an Ansible target into all devices in the "coreos_beta" group:
.. code-block:: yaml
# playbook_bootstrap.yml
- hosts: coreos_beta
gather_facts: false
roles:
- defunctzombie.coreos-boostrap
Don't forget to supply the dynamic inventory in the ``-i`` argument!
.. code-block:: bash
$ ansible-playbook -u core -i packet_net.py playbook_bootstrap.yml
If you have any questions or comments, let us know! [email protected]
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,002 |
Docs: Replace Latin terms with English in the scenario_guides directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g., via, etc.) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/scenario_guides/ directory.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/scenario_guides/ directory to find these.
A list of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79002
|
https://github.com/ansible/ansible/pull/79008
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
|
367cdae3b279a5281a56808827af27c8883a4ad4
| 2022-10-03T20:07:19Z |
python
| 2022-10-04T09:35:45Z |
docs/docsite/rst/scenario_guides/guide_rax.rst
|
Rackspace Cloud Guide
=====================
.. _rax_introduction:
Introduction
````````````
.. note:: Rackspace functionality in Ansible is not maintained and users should consider the `OpenStack collection <https://galaxy.ansible.com/openstack/cloud>`_ instead.
Ansible contains a number of core modules for interacting with Rackspace Cloud.
The purpose of this section is to explain how to put Ansible modules together
(and use inventory scripts) to use Ansible in a Rackspace Cloud context.
Prerequisites for using the rax modules are minimal. In addition to ansible itself,
all of the modules require and are tested against pyrax 1.5 or higher.
You'll need this Python module installed on the execution host.
``pyrax`` is not currently available in many operating system
package repositories, so you will likely need to install it with pip:
.. code-block:: bash
$ pip install pyrax
Ansible creates an implicit localhost that executes in the same context as the ``ansible-playbook`` and the other CLI tools.
If for any reason you need or want to have it in your inventory you should do something like the following:
.. code-block:: ini
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/usr/local/bin/python2
For more information see :ref:`Implicit Localhost <implicit_localhost>`
In playbook steps, we'll typically be using the following pattern:
.. code-block:: yaml
- hosts: localhost
gather_facts: False
tasks:
.. _credentials_file:
Credentials File
````````````````
The `rax.py` inventory script and all `rax` modules support a standard `pyrax` credentials file that looks like:
.. code-block:: ini
[rackspace_cloud]
username = myraxusername
api_key = d41d8cd98f00b204e9800998ecf8427e
Setting the environment parameter ``RAX_CREDS_FILE`` to the path of this file will help Ansible find how to load
this information.
More information about this credentials file can be found at
https://github.com/pycontribs/pyrax/blob/master/docs/getting_started.md#authenticating
.. _virtual_environment:
Running from a Python Virtual Environment (Optional)
++++++++++++++++++++++++++++++++++++++++++++++++++++
Most users will not be using virtualenv, but some users, particularly Python developers, sometimes like to.
There are special considerations when Ansible is installed in a Python virtualenv rather than at the default global scope. Ansible assumes, unless otherwise instructed, that the Python binary lives at /usr/bin/python. This is done through the interpreter line in modules; however, when instructed by setting the inventory variable 'ansible_python_interpreter', Ansible will use this specified path instead to find Python. This can be a cause of confusion, as one may assume that modules running on 'localhost', or perhaps running through 'local_action', are using the virtualenv Python interpreter. By setting this line in the inventory, the modules will execute in the virtualenv interpreter and have available the virtualenv packages, specifically pyrax. If using virtualenv, you may wish to modify your localhost inventory definition to find this location as follows:
.. code-block:: ini
[localhost]
localhost ansible_connection=local ansible_python_interpreter=/path/to/ansible_venv/bin/python
.. note::
pyrax may be installed in the global Python package scope or in a virtual environment. There are no special considerations to keep in mind when installing pyrax.
.. _provisioning:
Provisioning
````````````
Now for the fun parts.
The 'rax' module provides the ability to provision instances within Rackspace Cloud. Typically the provisioning task will be performed from your Ansible control server (in our example, localhost) against the Rackspace cloud API. This is done for several reasons:
- Avoiding installing the pyrax library on remote nodes
- No need to encrypt and distribute credentials to remote nodes
- Speed and simplicity
.. note::
Authentication with the Rackspace-related modules is handled by either
specifying your username and API key as environment variables or passing
them as module arguments, or by specifying the location of a credentials
file.
Here is a basic example of provisioning an instance in ad hoc mode:
.. code-block:: bash
$ ansible localhost -m rax -a "name=awx flavor=4 image=ubuntu-1204-lts-precise-pangolin wait=yes"
Here's what it would look like in a playbook, assuming the parameters were defined in variables:
.. code-block:: yaml
tasks:
- name: Provision a set of instances
rax:
name: "{{ rax_name }}"
flavor: "{{ rax_flavor }}"
image: "{{ rax_image }}"
count: "{{ rax_count }}"
group: "{{ group }}"
wait: true
register: rax
delegate_to: localhost
The rax module returns data about the nodes it creates, such as IP addresses, hostnames, and login passwords. By registering the return value of the step, it is possible to use this data to dynamically add the resulting hosts to inventory (temporarily, in memory). This facilitates performing configuration actions on the hosts in a follow-on task. In the following example, the servers that were successfully created using the above task are dynamically added to a group called "raxhosts", with each node's hostname, IP address, and root password being added to the inventory.
.. code-block:: yaml
- name: Add the instances we created (by public IP) to the group 'raxhosts'
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
groups: raxhosts
loop: "{{ rax.success }}"
when: rax.action == 'create'
With the host group now created, the next play in this playbook could now configure servers belonging to the raxhosts group.
.. code-block:: yaml
- name: Configuration play
hosts: raxhosts
user: root
roles:
- ntp
- webserver
The method above ties the configuration of a host with the provisioning step. This isn't always what you want, and leads us
to the next section.
.. _host_inventory:
Host Inventory
``````````````
Once your nodes are spun up, you'll probably want to talk to them again. The best way to handle this is to use the "rax" inventory plugin, which dynamically queries Rackspace Cloud and tells Ansible what nodes you have to manage. You might want to use this even if you are spinning up cloud instances with other tools, including the Rackspace Cloud user interface. The inventory plugin can be used to group resources by metadata, region, OS, and so on. Utilizing metadata is highly recommended in "rax" and can provide an easy way to sort between host groups and roles. If you don't want to use the ``rax.py`` dynamic inventory script, you could also still choose to manually manage your INI inventory file, though this is less recommended.
In Ansible it is quite possible to use multiple dynamic inventory plugins along with INI file data. Just put them in a common directory and be sure the scripts are executable (chmod +x) and the INI-based ones are not.
.. _raxpy:
rax.py
++++++
To use the Rackspace dynamic inventory script, copy ``rax.py`` into your inventory directory and make it executable. You can specify a credentials file for ``rax.py`` utilizing the ``RAX_CREDS_FILE`` environment variable.
.. note:: Dynamic inventory scripts (like ``rax.py``) are saved in ``/usr/share/ansible/inventory`` if Ansible has been installed globally. If installed to a virtualenv, the inventory scripts are installed to ``$VIRTUALENV/share/inventory``.
.. note:: Users of :ref:`ansible_platform` will note that dynamic inventory is natively supported by the controller in the platform, and all you have to do is associate a group with your Rackspace Cloud credentials, and it will easily synchronize without going through these steps::
$ RAX_CREDS_FILE=~/.raxpub ansible all -i rax.py -m setup
``rax.py`` also accepts a ``RAX_REGION`` environment variable, which can contain an individual region, or a comma separated list of regions.
When using ``rax.py``, you will not have a 'localhost' defined in the inventory.
As mentioned previously, you will often be running most of these modules outside of the host loop, and will need 'localhost' defined. The recommended way to do this would be to create an ``inventory`` directory, and place both the ``rax.py`` script and a file containing ``localhost`` in it.
Executing ``ansible`` or ``ansible-playbook`` and specifying the ``inventory`` directory instead
of an individual file will cause Ansible to evaluate each file in that directory for inventory.
Let's test our inventory script to see if it can talk to Rackspace Cloud.
.. code-block:: bash
$ RAX_CREDS_FILE=~/.raxpub ansible all -i inventory/ -m setup
Assuming things are properly configured, the ``rax.py`` inventory script will output information similar to the
following information, which will be utilized for inventory and variables.
.. code-block:: json
{
"ORD": [
"test"
],
"_meta": {
"hostvars": {
"test": {
"ansible_host": "198.51.100.1",
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
}
}
}
}
.. _standard_inventory:
Standard Inventory
++++++++++++++++++
When utilizing a standard ini formatted inventory file (as opposed to the inventory plugin), it may still be advantageous to retrieve discoverable hostvar information from the Rackspace API.
This can be achieved with the ``rax_facts`` module and an inventory file similar to the following:
.. code-block:: ini
[test_servers]
hostname1 rax_region=ORD
hostname2 rax_region=ORD
.. code-block:: yaml
- name: Gather info about servers
hosts: test_servers
gather_facts: False
tasks:
- name: Get facts about servers
rax_facts:
credentials: ~/.raxpub
name: "{{ inventory_hostname }}"
region: "{{ rax_region }}"
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
While you don't need to know how it works, it may be interesting to know what kind of variables are returned.
The ``rax_facts`` module provides facts as follows, matching the ``rax.py`` inventory script:
.. code-block:: json
{
"ansible_facts": {
"rax_accessipv4": "198.51.100.1",
"rax_accessipv6": "2001:DB8::2342",
"rax_addresses": {
"private": [
{
"addr": "192.0.2.2",
"version": 4
}
],
"public": [
{
"addr": "198.51.100.1",
"version": 4
},
{
"addr": "2001:DB8::2342",
"version": 6
}
]
},
"rax_config_drive": "",
"rax_created": "2013-11-14T20:48:22Z",
"rax_flavor": {
"id": "performance1-1",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/flavors/performance1-1",
"rel": "bookmark"
}
]
},
"rax_hostid": "e7b6961a9bd943ee82b13816426f1563bfda6846aad84d52af45a4904660cde0",
"rax_human_id": "test",
"rax_id": "099a447b-a644-471f-87b9-a7f580eb0c2a",
"rax_image": {
"id": "b211c7bf-b5b4-4ede-a8de-a4368750c653",
"links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/images/b211c7bf-b5b4-4ede-a8de-a4368750c653",
"rel": "bookmark"
}
]
},
"rax_key_name": null,
"rax_links": [
{
"href": "https://ord.servers.api.rackspacecloud.com/v2/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "self"
},
{
"href": "https://ord.servers.api.rackspacecloud.com/111111/servers/099a447b-a644-471f-87b9-a7f580eb0c2a",
"rel": "bookmark"
}
],
"rax_metadata": {
"foo": "bar"
},
"rax_name": "test",
"rax_name_attr": "name",
"rax_networks": {
"private": [
"192.0.2.2"
],
"public": [
"198.51.100.1",
"2001:DB8::2342"
]
},
"rax_os-dcf_diskconfig": "AUTO",
"rax_os-ext-sts_power_state": 1,
"rax_os-ext-sts_task_state": null,
"rax_os-ext-sts_vm_state": "active",
"rax_progress": 100,
"rax_status": "ACTIVE",
"rax_tenant_id": "111111",
"rax_updated": "2013-11-14T20:49:27Z",
"rax_user_id": "22222"
},
"changed": false
}
Use Cases
`````````
This section covers some additional usage examples built around a specific use case.
.. _network_and_server:
Network and Server
++++++++++++++++++
Create an isolated cloud network and build a server
.. code-block:: yaml
- name: Build Servers on an Isolated Network
hosts: localhost
gather_facts: False
tasks:
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
region: IAD
state: present
delegate_to: localhost
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: 2
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- my-net
region: IAD
state: present
count: 5
exact_count: true
group: web
wait: true
wait_timeout: 360
register: rax
delegate_to: localhost
.. _complete_environment:
Complete Environment
++++++++++++++++++++
Build a complete webserver environment with servers, custom networks and load balancers, install nginx and create a custom index.html
.. code-block:: yaml
---
- name: Build environment
hosts: localhost
gather_facts: False
tasks:
- name: Load Balancer create request
rax_clb:
credentials: ~/.raxpub
name: my-lb
port: 80
protocol: HTTP
algorithm: ROUND_ROBIN
type: PUBLIC
timeout: 30
region: IAD
wait: true
state: present
meta:
app: my-cool-app
register: clb
- name: Network create request
rax_network:
credentials: ~/.raxpub
label: my-net
cidr: 192.168.3.0/24
state: present
region: IAD
register: network
- name: Server create request
rax:
credentials: ~/.raxpub
name: web%04d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
networks:
- public
- private
- my-net
region: IAD
state: present
count: 5
exact_count: true
group: web
wait: true
register: rax
- name: Add servers to web host group
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
groups: web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Add servers to Load balancer
rax_clb_nodes:
credentials: ~/.raxpub
load_balancer_id: "{{ clb.balancer.id }}"
address: "{{ item.rax_networks.private|first }}"
port: 80
condition: enabled
type: primary
wait: true
region: IAD
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Configure servers
hosts: web
handlers:
- name: restart nginx
service: name=nginx state=restarted
tasks:
- name: Install nginx
apt: pkg=nginx state=latest update_cache=yes cache_valid_time=86400
notify:
- restart nginx
- name: Ensure nginx starts on boot
service: name=nginx state=started enabled=yes
- name: Create custom index.html
copy: content="{{ inventory_hostname }}" dest=/usr/share/nginx/www/index.html
owner=root group=root mode=0644
.. _rackconnect_and_manged_cloud:
RackConnect and Managed Cloud
+++++++++++++++++++++++++++++
When using RackConnect version 2 or Rackspace Managed Cloud there are Rackspace automation tasks that are executed on the servers you create after they are successfully built. If your automation executes before the RackConnect or Managed Cloud automation, you can cause failures and unusable servers.
These examples show creating servers, and ensuring that the Rackspace automation has completed before Ansible continues onwards.
For simplicity, these examples are joined, however both are only needed when using RackConnect. When only using Managed Cloud, the RackConnect portion can be ignored.
The RackConnect portions only apply to RackConnect version 2.
.. _using_a_control_machine:
Using a Control Machine
***********************
.. code-block:: yaml
- name: Create an exact count of servers
hosts: localhost
gather_facts: False
tasks:
- name: Server build requests
rax:
credentials: ~/.raxpub
name: web%03d.example.org
flavor: performance1-1
image: ubuntu-1204-lts-precise-pangolin
disk_config: manual
region: DFW
state: present
count: 1
exact_count: true
group: web
wait: true
register: rax
- name: Add servers to in memory groups
add_host:
hostname: "{{ item.name }}"
ansible_host: "{{ item.rax_accessipv4 }}"
ansible_password: "{{ item.rax_adminpass }}"
ansible_user: root
rax_id: "{{ item.rax_id }}"
groups: web,new_web
loop: "{{ rax.success }}"
when: rax.action == 'create'
- name: Wait for rackconnect and managed cloud automation to complete
hosts: new_web
gather_facts: false
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Wait for rackconnnect automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rackconnect_automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
rax_facts:
credentials: ~/.raxpub
id: "{{ rax_id }}"
region: DFW
register: rax_facts
until: rax_facts.ansible_facts['rax_metadata']['rax_service_level_automation']|default('') == 'Complete'
retries: 30
delay: 10
- name: Update new_web hosts with IP that RackConnect assigns
hosts: new_web
gather_facts: false
tasks:
- name: Get facts about servers
rax_facts:
name: "{{ inventory_hostname }}"
region: DFW
delegate_to: localhost
- name: Map some facts
set_fact:
ansible_host: "{{ rax_accessipv4 }}"
- name: Base Configure Servers
hosts: web
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull:
Using Ansible Pull
******************
.. code-block:: yaml
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: ensure we run all tasks from localhost
delegate_to: localhost
block:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Get region
command: xenstore-read vm-data/provider_data/region
register: rax_region
when: bootstrap.stat.exists != True
- name: Wait for rackconnect automation to complete
uri:
url: "https://{{ rax_region.stdout|trim }}.api.rackconnect.rackspace.com/v1/automation_status?format=json"
return_content: true
register: automation_status
when: bootstrap.stat.exists != True
until: automation_status['automation_status']|default('') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
wait_for:
path: /tmp/rs_managed_cloud_automation_complete
delay: 10
when: bootstrap.stat.exists != True
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _using_ansible_pull_with_xenstore:
Using Ansible Pull with XenStore
********************************
.. code-block:: yaml
---
- name: Ensure Rackconnect and Managed Cloud Automation is complete
hosts: all
tasks:
- name: Check for completed bootstrap
stat:
path: /etc/bootstrap_complete
register: bootstrap
- name: Wait for rackconnect_automation_status xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rackconnect_automation_status
register: rcas_exists
when: bootstrap.stat.exists != True
failed_when: rcas_exists.rc|int > 1
until: rcas_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for rackconnect automation to complete
command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
register: rcas
when: bootstrap.stat.exists != True
until: rcas.stdout|replace('"', '') == 'DEPLOYED'
retries: 30
delay: 10
- name: Wait for rax_service_level_automation xenstore key to exist
command: xenstore-exists vm-data/user-metadata/rax_service_level_automation
register: rsla_exists
when: bootstrap.stat.exists != True
failed_when: rsla_exists.rc|int > 1
until: rsla_exists.rc|int == 0
retries: 30
delay: 10
- name: Wait for managed cloud automation to complete
command: xenstore-read vm-data/user-metadata/rackconnect_automation_status
register: rsla
when: bootstrap.stat.exists != True
until: rsla.stdout|replace('"', '') == 'DEPLOYED'
retries: 30
delay: 10
- name: Set bootstrap completed
file:
path: /etc/bootstrap_complete
state: touch
owner: root
group: root
mode: 0400
- name: Base Configure Servers
hosts: all
roles:
- role: users
- role: openssh
opensshd_PermitRootLogin: "no"
- role: ntp
.. _advanced_usage:
Advanced Usage
``````````````
.. _awx_autoscale:
Autoscaling with AWX or Red Hat Ansible Automation Platform
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
The GUI component of :ref:`Red Hat Ansible Automation Platform <ansible_tower>` also contains a very nice feature for auto-scaling use cases. In this mode, a simple curl script can call
a defined URL and the server will "dial out" to the requester and configure an instance that is spinning up. This can be a great way
to reconfigure ephemeral nodes. See `the documentation on provisioning callbacks <https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html#provisioning-callbacks>`_ for more details.
A benefit of using the callback approach over pull mode is that job results are still centrally recorded
and less information has to be shared with remote hosts.
.. _pending_information:
Orchestration in the Rackspace Cloud
++++++++++++++++++++++++++++++++++++
Ansible is a powerful orchestration tool, and the rax modules let you orchestrate complex tasks, deployments, and configurations. The key here is to automate provisioning of infrastructure, like any other piece of software in an environment. Complex deployments might have previously required manual manipulation of load balancers, or manual provisioning of servers. Utilizing the rax modules included with Ansible, one can make the deployment of additional nodes contingent on the current number of running nodes, or make the configuration of a clustered application dependent on the number of nodes with common metadata. One could automate the following scenarios, for example (a sketch of the first scenario follows the list):
* Servers that are removed from a Cloud Load Balancer one-by-one, updated, verified, and returned to the load balancer pool
* Expansion of an already-online environment, where nodes are provisioned, bootstrapped, configured, and software installed
* A procedure where app log files are uploaded to a central location, like Cloud Files, before a node is decommissioned
* Servers and load balancers that have DNS records created and destroyed on creation and decommissioning, respectively
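As a hedged skeleton of the first scenario, the play below drains one node at a time, leaves room for your update and verification tasks, and then re-enables the node. The ``clb_id`` variable is a placeholder, and you should consult the rax_clb_nodes module documentation for how best to target an existing node:

.. code-block:: yaml

    - name: Rolling update behind a Cloud Load Balancer
      hosts: web
      serial: 1
      tasks:
        - name: Drain this node from the load balancer
          rax_clb_nodes:
            credentials: ~/.raxpub
            load_balancer_id: "{{ clb_id }}"   # placeholder
            address: "{{ rax_networks.private | first }}"
            port: 80
            condition: draining
            wait: true
            region: IAD
          delegate_to: localhost

        # ... update and verify the node here ...

        - name: Return the node to the load balancer pool
          rax_clb_nodes:
            credentials: ~/.raxpub
            load_balancer_id: "{{ clb_id }}"
            address: "{{ rax_networks.private | first }}"
            port: 80
            condition: enabled
            wait: true
            region: IAD
          delegate_to: localhost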
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 79,002 |
Docs: Replace Latin terms with English in the scenario_guides directory
|
### Summary
Our [style guide](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#avoid-using-latin-phrases) notes that we should not use Latin terms (e.g., via, etc.) in our documentation. This issue specifically asks to replace these with the noted English equivalents from that style guide table for all occurrences in the docs/docsite/rst/scenario_guides/ directory.
Use `grep -R -e 'etc\.' -e 'i\.e ' -e 'e\.g\. ' -e 'via ' -e 'vs\(\.\)\? ' -e versus ` in the docs/docsite/rst/scenario_guides/ directory to find these.
A list of all affected files is in a follow-on comment.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/scenario_guides/index.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/79002
|
https://github.com/ansible/ansible/pull/79008
|
4d3c12ae9ead67aee1328dacded55d8cf8cad796
|
367cdae3b279a5281a56808827af27c8883a4ad4
| 2022-10-03T20:07:19Z |
python
| 2022-10-04T09:35:45Z |
docs/docsite/rst/scenario_guides/guide_scaleway.rst
|
.. _guide_scaleway:
**************
Scaleway Guide
**************
.. _scaleway_introduction:
Introduction
============
`Scaleway <https://scaleway.com>`_ is a cloud provider supported by Ansible (version 2.6 or higher) through a dynamic inventory plugin and modules.
Those modules are:
- :ref:`scaleway_sshkey_module`: adds a public SSH key from a file or value to the Scaleway infrastructure. Every subsequently-created device will have this public key installed in .ssh/authorized_keys.
- :ref:`scaleway_compute_module`: manages servers on Scaleway. You can use this module to create, restart and delete servers.
- :ref:`scaleway_volume_module`: manages volumes on Scaleway.
.. note::
This guide assumes you are familiar with Ansible and how it works.
If you're not, have a look at :ref:`ansible_documentation` before getting started.
.. _scaleway_requirements:
Requirements
============
The Scaleway modules and inventory script connect to the Scaleway API using the `Scaleway REST API <https://developer.scaleway.com>`_.
To use the modules and inventory script you'll need a Scaleway API token.
You can generate an API token in the Scaleway console `here <https://cloud.scaleway.com/#/credentials>`__.
The simplest way to authenticate yourself is to set the Scaleway API token in an environment variable:
.. code-block:: bash
$ export SCW_TOKEN=00000000-1111-2222-3333-444444444444
If you're not comfortable exporting your API token, you can pass it as a parameter to the modules using the ``api_token`` argument.
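For example, a minimal sketch that assumes ``scw_api_token`` is defined elsewhere (ideally in Ansible Vault):

.. code-block:: yaml

    - name: Add SSH key without exporting the token
      scaleway_sshkey:
        api_token: "{{ scw_api_token }}"   # assumed variable, for example from Ansible Vault
        ssh_pub_key: "ssh-rsa AAAA..."
        state: present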
If you want to use a new SSH key pair in this tutorial, you can generate it to ``./id_rsa`` and ``./id_rsa.pub`` as:
.. code-block:: bash
$ ssh-keygen -t rsa -f ./id_rsa
If you want to use an existing key pair, just copy the private and public key over to the playbook directory.
.. _scaleway_add_sshkey:
How to add an SSH key?
======================
Connections to Scaleway Compute nodes use Secure Shell (SSH).
SSH keys are stored at the account level, which means that you can re-use the same SSH key in multiple nodes.
The first step to configure Scaleway compute resources is to have at least one SSH key configured.
:ref:`scaleway_sshkey_module` is a module that manages SSH keys on your Scaleway account.
You can add an SSH key to your account by including the following task in a playbook:
.. code-block:: yaml
- name: "Add SSH key"
scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAA..."
state: "present"
The ``ssh_pub_key`` parameter contains your ssh public key as a string. Here is an example inside a playbook:
.. code-block:: yaml
- name: Test SSH key lifecycle on a Scaleway account
hosts: localhost
gather_facts: false
environment:
SCW_API_KEY: ""
tasks:
- scaleway_sshkey:
ssh_pub_key: "ssh-rsa AAAAB...424242 [email protected]"
state: present
register: result
- assert:
that:
- result is success and result is changed
.. _scaleway_create_instance:
How to create a compute instance?
=================================
Now that we have an SSH key configured, the next step is to spin up a server!
:ref:`scaleway_compute_module` is a module that can create, update and delete Scaleway compute instances:
.. code-block:: yaml
- name: Create a server
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
Here are the parameter details for the example shown above:
- ``name`` is the name of the instance (the one that will show up in your web console).
- ``image`` is the UUID of the system image you would like to use.
A list of all images is available for each availability zone.
- ``organization`` represents the organization that your account is attached to.
- ``region`` represents the Availability Zone which your instance is in (for this example, par1 and ams1).
- ``commercial_type`` represents the name of the commercial offers.
You can check out the Scaleway pricing page to find which instance is right for you.
Take a look at this short playbook to see a working example using ``scaleway_compute``:
.. code-block:: yaml
- name: Test compute instance lifecycle on a Scaleway account
hosts: localhost
gather_facts: false
environment:
SCW_API_KEY: ""
tasks:
- name: Create a server
register: server_creation_task
scaleway_compute:
name: foobar
state: present
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
- debug: var=server_creation_task
- assert:
that:
- server_creation_task is success
- server_creation_task is changed
- name: Run it
scaleway_compute:
name: foobar
state: running
image: 00000000-1111-2222-3333-444444444444
organization: 00000000-1111-2222-3333-444444444444
region: ams1
commercial_type: START1-S
wait: true
tags:
- web_server
register: server_run_task
- debug: var=server_run_task
- assert:
that:
- server_run_task is success
- server_run_task is changed
.. _scaleway_dynamic_inventory_tutorial:
Dynamic Inventory Script
========================
Ansible ships with :ref:`scaleway_inventory`.
You can now get a complete inventory of your Scaleway resources through this plugin and filter it on
different parameters (``regions`` and ``tags`` are currently supported).
Let's create an example!
Suppose that we want to get all hosts that have the tag web_server.
Create a file named ``scaleway_inventory.yml`` with the following content:
.. code-block:: yaml
plugin: scaleway
regions:
- ams1
- par1
tags:
- web_server
This inventory means that we want all hosts that have the tag ``web_server`` in the zones ``ams1`` and ``par1``.
Once you have configured this file, you can get the information using the following command:
.. code-block:: bash
$ ansible-inventory --list -i scaleway_inventory.yml
The output will be:
.. code-block:: yaml
{
"_meta": {
"hostvars": {
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d": {
"ansible_verbosity": 6,
"arch": "x86_64",
"commercial_type": "START1-S",
"hostname": "foobar",
"ipv4": "192.0.2.1",
"organization": "00000000-1111-2222-3333-444444444444",
"state": "running",
"tags": [
"web_server"
]
}
}
},
"all": {
"children": [
"ams1",
"par1",
"ungrouped",
"web_server"
]
},
"ams1": {},
"par1": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
},
"ungrouped": {},
"web_server": {
"hosts": [
"dd8e3ae9-0c7c-459e-bc7b-aba8bfa1bb8d"
]
}
}
As you can see, we get different groups of hosts.
``par1`` and ``ams1`` are groups based on location.
``web_server`` is a group based on a tag.
If a filter parameter is not defined, the plugin assumes that all possible values are wanted.
This means that for each tag that exists on your Scaleway compute nodes, a group based on that tag will be created.
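For example, leaving out ``tags`` entirely produces one group per tag found on your compute nodes. A minimal sketch of such a configuration:

.. code-block:: yaml

    plugin: scaleway
    regions:
      - par1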
Scaleway S3 object storage
==========================
`Object Storage <https://www.scaleway.com/object-storage>`_ allows you to store any kind of objects (documents, images, videos, and so on).
As the Scaleway API is S3 compatible, Ansible supports it natively through the modules: :ref:`s3_bucket_module`, :ref:`aws_s3_module`.
You can find many examples in the `scaleway_s3 integration tests <https://github.com/ansible/ansible-legacy-tests/tree/devel/test/legacy/roles/scaleway_s3>`_.
.. code-block:: yaml+jinja
- hosts: myserver
vars:
scaleway_region: nl-ams
s3_url: https://s3.nl-ams.scw.cloud
environment:
# AWS_ACCESS_KEY matches your scaleway organization id available at https://cloud.scaleway.com/#/account
AWS_ACCESS_KEY: 00000000-1111-2222-3333-444444444444
# AWS_SECRET_KEY matches a secret token that you can retrieve at https://cloud.scaleway.com/#/credentials
AWS_SECRET_KEY: aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee
module_defaults:
group/aws:
s3_url: '{{ s3_url }}'
region: '{{ scaleway_region }}'
tasks:
        # use a fact instead of a variable, otherwise the template is evaluated each time the variable is used
- set_fact:
bucket_name: "{{ 99999999 | random | to_uuid }}"
        # "requester_pays:" is mandatory because Scaleway does not implement the related API;
        # another way is to use aws_s3 with "mode: create"
- s3_bucket:
name: '{{ bucket_name }}'
requester_pays:
- name: Another way to create the bucket
aws_s3:
bucket: '{{ bucket_name }}'
mode: create
encrypt: false
register: bucket_creation_check
        - name: Add an object to the bucket
aws_s3:
mode: put
bucket: '{{ bucket_name }}'
            src: /tmp/test.txt # needs to be created beforehand
object: test.txt
encrypt: false # server side encryption must be disabled
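When you are done, the bucket can be deleted with the same module. A sketch, assuming the bucket has been emptied first:

.. code-block:: yaml

    - name: Delete the bucket
      s3_bucket:
        name: '{{ bucket_name }}'
        state: absent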
closed | ansible/ansible | https://github.com/ansible/ansible | #78912 | Change to filter/test plugin loading / templating breaks certain playbook constructs
### Summary
When referencing the test `==`, ansible-core crashes with a very unhelpful error:
```
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: line 0
```
Passing `-vvv` gives a better hint: `KeyError: 'invalid plugin name: ansible.builtin.=='`. (See below for the full stacktrace.)
git bisect exposed 4260b71cc77b7a44e061668d0d408d847f550156 as the culprit.
### Issue Type
Bug Report
### Component Name
filter/test plugin loading
### Ansible Version
```console
devel
```
### Configuration
```console
-
```
### OS / Environment
-
### Steps to Reproduce
```
ansible -vvv localhost -m debug -a 'msg={{ [1] | selectattr("failed", "==", true) }}'
```
### Expected Results
Some error like
```
localhost | FAILED! => {
"msg": "The task includes an option with an undefined variable. The error was: 'int object' has no attribute 'failed'. 'int object' has no attribute 'failed'"
}
```
### Actual Results
```console
The full traceback is:
Traceback (most recent call last):
File "/path/to/ansible/lib/ansible/template/__init__.py", line 438, in __getitem__
plugin = self._pluginloader.get(key)
File "/path/to/ansible/lib/ansible/plugins/loader.py", line 830, in get
return self.get_with_context(name, *args, **kwargs).object
File "/path/to/ansible/lib/ansible/plugins/loader.py", line 1131, in get_with_context
raise KeyError('invalid plugin name: {0}'.format(key))
KeyError: 'invalid plugin name: ansible.builtin.=='
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/path/to/ansible/lib/ansible/executor/task_executor.py", line 525, in _execute
self._task.post_validate(templar=templar)
File "/path/to/ansible/lib/ansible/playbook/task.py", line 291, in post_validate
super(Task, self).post_validate(templar)
File "/path/to/ansible/lib/ansible/playbook/base.py", line 525, in post_validate
value = templar.template(getattr(self, name))
File "/path/to/ansible/lib/ansible/template/__init__.py", line 755, in template
d[k] = self.template(
File "/path/to/ansible/lib/ansible/template/__init__.py", line 729, in template
result = self.do_template(
File "/path/to/ansible/lib/ansible/template/__init__.py", line 992, in do_template
res = self.environment.concat(rf)
File "/path/to/ansible/lib/ansible/template/native_helpers.py", line 44, in ansible_eval_concat
head = list(islice(nodes, 2))
File "<template>", line 17, in root
File "/path/to/ansible/lib/ansible/template/__init__.py", line 264, in wrapper
return list(ret)
File "/usr/lib/python3.10/site-packages/jinja2/filters.py", line 1765, in select_or_reject
if func(item):
File "/usr/lib/python3.10/site-packages/jinja2/filters.py", line 1750, in <lambda>
return lambda item: modfunc(func(transfunc(item)))
File "/usr/lib/python3.10/site-packages/jinja2/filters.py", line 1745, in func
return context.environment.call_test(name, item, args, kwargs)
File "/usr/lib/python3.10/site-packages/jinja2/environment.py", line 589, in call_test
return self._filter_test_common(
File "/usr/lib/python3.10/site-packages/jinja2/environment.py", line 510, in _filter_test_common
func = env_map.get(name) # type: ignore
File "/usr/lib/python3.10/_collections_abc.py", line 819, in get
return self[key]
File "/path/to/ansible/lib/ansible/template/__init__.py", line 440, in __getitem__
raise TemplateSyntaxError('Could not load "%s": %s' % (key, to_native(e)), 0)
jinja2.exceptions.TemplateSyntaxError: Could not load "==": 'invalid plugin name: ansible.builtin.=='
line 0
localhost | FAILED! => {
"changed": false
}
```
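A possible workaround until the fix lands (a sketch, assuming the spelled-out aliases `eq`/`equalto` of the same Jinja2 builtin test fall through to the Jinja2 builtins rather than being resolved as plugin names):

```yaml
- debug:
    msg: "{{ [{'failed': true}, {'failed': false}] | selectattr('failed', 'equalto', true) | list }}"
```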
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
issue: https://github.com/ansible/ansible/issues/78912 | pull: https://github.com/ansible/ansible/pull/78913
before_fix_sha: 367cdae3b279a5281a56808827af27c8883a4ad4 | after_fix_sha: 6d0aeac1e166842f2833f4fb64c727cc7f818118
reported: 2022-09-28T18:53:10Z | language: python | committed: 2022-10-04T13:44:00Z
updated_file: changelogs/fragments/78913-template-missing-filter-test.yml (empty)
closed | ansible/ansible | https://github.com/ansible/ansible | #78912 | Change to filter/test plugin loading / templating breaks certain playbook constructs
issue: https://github.com/ansible/ansible/issues/78912 | pull: https://github.com/ansible/ansible/pull/78913
before_fix_sha: 367cdae3b279a5281a56808827af27c8883a4ad4 | after_fix_sha: 6d0aeac1e166842f2833f4fb64c727cc7f818118
reported: 2022-09-28T18:53:10Z | language: python | committed: 2022-10-04T13:44:00Z
updated_file: lib/ansible/template/__init__.py
# (c) 2012-2014, Michael DeHaan <[email protected]>
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
# Make coding more python3-ish
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import ast
import datetime
import os
import pwd
import re
import time
from collections.abc import Iterator, Sequence, Mapping, MappingView, MutableMapping
from contextlib import contextmanager
from numbers import Number
from traceback import format_exc
from jinja2.exceptions import TemplateSyntaxError, UndefinedError
from jinja2.loaders import FileSystemLoader
from jinja2.nativetypes import NativeEnvironment
from jinja2.runtime import Context, StrictUndefined
from ansible import constants as C
from ansible.errors import (
AnsibleAssertionError,
AnsibleError,
AnsibleFilterError,
AnsibleLookupError,
AnsibleOptionsError,
AnsibleUndefinedVariable,
)
from ansible.module_utils.six import string_types, text_type
from ansible.module_utils._text import to_native, to_text, to_bytes
from ansible.module_utils.common.collections import is_sequence
from ansible.plugins.loader import filter_loader, lookup_loader, test_loader
from ansible.template.native_helpers import ansible_native_concat, ansible_eval_concat, ansible_concat
from ansible.template.template import AnsibleJ2Template
from ansible.template.vars import AnsibleJ2Vars
from ansible.utils.display import Display
from ansible.utils.listify import listify_lookup_plugin_terms
from ansible.utils.native_jinja import NativeJinjaText
from ansible.utils.unsafe_proxy import wrap_var
display = Display()
__all__ = ['Templar', 'generate_ansible_template_vars']
# Primitive Types which we don't want Jinja to convert to strings.
NON_TEMPLATED_TYPES = (bool, Number)
JINJA2_OVERRIDE = '#jinja2:'
JINJA2_BEGIN_TOKENS = frozenset(('variable_begin', 'block_begin', 'comment_begin', 'raw_begin'))
JINJA2_END_TOKENS = frozenset(('variable_end', 'block_end', 'comment_end', 'raw_end'))
RANGE_TYPE = type(range(0))
def generate_ansible_template_vars(path, fullpath=None, dest_path=None):
if fullpath is None:
b_path = to_bytes(path)
else:
b_path = to_bytes(fullpath)
try:
template_uid = pwd.getpwuid(os.stat(b_path).st_uid).pw_name
except (KeyError, TypeError):
template_uid = os.stat(b_path).st_uid
temp_vars = {
'template_host': to_text(os.uname()[1]),
'template_path': path,
'template_mtime': datetime.datetime.fromtimestamp(os.path.getmtime(b_path)),
'template_uid': to_text(template_uid),
'template_run_date': datetime.datetime.now(),
'template_destpath': to_native(dest_path) if dest_path else None,
}
if fullpath is None:
temp_vars['template_fullpath'] = os.path.abspath(path)
else:
temp_vars['template_fullpath'] = fullpath
managed_default = C.DEFAULT_MANAGED_STR
managed_str = managed_default.format(
host=temp_vars['template_host'],
uid=temp_vars['template_uid'],
file=temp_vars['template_path'],
)
temp_vars['ansible_managed'] = to_text(time.strftime(to_native(managed_str), time.localtime(os.path.getmtime(b_path))))
return temp_vars
def _escape_backslashes(data, jinja_env):
"""Double backslashes within jinja2 expressions
A user may enter something like this in a playbook::
debug:
msg: "Test Case 1\\3; {{ test1_name | regex_replace('^(.*)_name$', '\\1')}}"
    The string inside of the {{ gets interpreted multiple times. First by yaml.
    Then by python. And finally by jinja2 as part of its variable. Because
it is processed by both python and jinja2, the backslash escaped
characters get unescaped twice. This means that we'd normally have to use
four backslashes to escape that. This is painful for playbook authors as
they have to remember different rules for inside vs outside of a jinja2
expression (The backslashes outside of the "{{ }}" only get processed by
yaml and python. So they only need to be escaped once). The following
code fixes this by automatically performing the extra quoting of
backslashes inside of a jinja2 expression.
"""
if '\\' in data and '{{' in data:
new_data = []
d2 = jinja_env.preprocess(data)
in_var = False
for token in jinja_env.lex(d2):
if token[1] == 'variable_begin':
in_var = True
new_data.append(token[2])
elif token[1] == 'variable_end':
in_var = False
new_data.append(token[2])
elif in_var and token[1] == 'string':
# Double backslashes only if we're inside of a jinja2 variable
new_data.append(token[2].replace('\\', '\\\\'))
else:
new_data.append(token[2])
data = ''.join(new_data)
return data
def is_possibly_template(data, jinja_env):
"""Determines if a string looks like a template, by seeing if it
contains a jinja2 start delimiter. Does not guarantee that the string
is actually a template.
This is different than ``is_template`` which is more strict.
This method may return ``True`` on a string that is not templatable.
Useful when guarding passing a string for templating, but when
you want to allow the templating engine to make the final
assessment which may result in ``TemplateSyntaxError``.
"""
if isinstance(data, string_types):
for marker in (jinja_env.block_start_string, jinja_env.variable_start_string, jinja_env.comment_start_string):
if marker in data:
return True
return False
def is_template(data, jinja_env):
"""This function attempts to quickly detect whether a value is a jinja2
template. To do so, we look for the first 2 matching jinja2 tokens for
start and end delimiters.
"""
found = None
start = True
comment = False
d2 = jinja_env.preprocess(data)
# Quick check to see if this is remotely like a template before doing
# more expensive investigation.
if not is_possibly_template(d2, jinja_env):
return False
# This wraps a lot of code, but this is due to lex returning a generator
# so we may get an exception at any part of the loop
try:
for token in jinja_env.lex(d2):
if token[1] in JINJA2_BEGIN_TOKENS:
if start and token[1] == 'comment_begin':
# Comments can wrap other token types
comment = True
start = False
# Example: variable_end -> variable
found = token[1].split('_')[0]
elif token[1] in JINJA2_END_TOKENS:
if token[1].split('_')[0] == found:
return True
elif comment:
continue
return False
except TemplateSyntaxError:
return False
return False
def _count_newlines_from_end(in_str):
'''
Counts the number of newlines at the end of a string. This is used during
the jinja2 templating to ensure the count matches the input, since some newlines
may be thrown away during the templating.
'''
try:
i = len(in_str)
j = i - 1
while in_str[j] == '\n':
j -= 1
return i - 1 - j
except IndexError:
# Uncommon cases: zero length string and string containing only newlines
return i
def recursive_check_defined(item):
from jinja2.runtime import Undefined
if isinstance(item, MutableMapping):
for key in item:
recursive_check_defined(item[key])
elif isinstance(item, list):
for i in item:
recursive_check_defined(i)
else:
if isinstance(item, Undefined):
raise AnsibleFilterError("{0} is undefined".format(item))
def _is_rolled(value):
"""Helper method to determine if something is an unrolled generator,
iterator, or similar object
"""
return (
isinstance(value, Iterator) or
isinstance(value, MappingView) or
isinstance(value, RANGE_TYPE)
)
def _unroll_iterator(func):
"""Wrapper function, that intercepts the result of a templating
and auto unrolls a generator, so that users are not required to
explicitly use ``|list`` to unroll.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
if _is_rolled(ret):
return list(ret)
return ret
return _update_wrapper(wrapper, func)
def _update_wrapper(wrapper, func):
# This code is duplicated from ``functools.update_wrapper`` from Py3.7.
# ``functools.update_wrapper`` was failing when the func was ``functools.partial``
for attr in ('__module__', '__name__', '__qualname__', '__doc__', '__annotations__'):
try:
value = getattr(func, attr)
except AttributeError:
pass
else:
setattr(wrapper, attr, value)
for attr in ('__dict__',):
getattr(wrapper, attr).update(getattr(func, attr, {}))
wrapper.__wrapped__ = func
return wrapper
def _wrap_native_text(func):
"""Wrapper function, that intercepts the result of a filter
and wraps it into NativeJinjaText which is then used
in ``ansible_native_concat`` to indicate that it is a text
which should not be passed into ``literal_eval``.
"""
def wrapper(*args, **kwargs):
ret = func(*args, **kwargs)
return NativeJinjaText(ret)
return _update_wrapper(wrapper, func)
class AnsibleUndefined(StrictUndefined):
'''
A custom Undefined class, which returns further Undefined objects on access,
rather than throwing an exception.
'''
def __getattr__(self, name):
if name == '__UNSAFE__':
# AnsibleUndefined should never be assumed to be unsafe
# This prevents ``hasattr(val, '__UNSAFE__')`` from evaluating to ``True``
raise AttributeError(name)
# Return original Undefined object to preserve the first failure context
return self
def __getitem__(self, key):
# Return original Undefined object to preserve the first failure context
return self
def __repr__(self):
return 'AnsibleUndefined(hint={0!r}, obj={1!r}, name={2!r})'.format(
self._undefined_hint,
self._undefined_obj,
self._undefined_name
)
def __contains__(self, item):
# Return original Undefined object to preserve the first failure context
return self
class AnsibleContext(Context):
'''
A custom context, which intercepts resolve_or_missing() calls and sets a flag
internally if any variable lookup returns an AnsibleUnsafe value. This
flag is checked post-templating, and (when set) will result in the
final templated result being wrapped in AnsibleUnsafe.
'''
def __init__(self, *args, **kwargs):
super(AnsibleContext, self).__init__(*args, **kwargs)
self.unsafe = False
def _is_unsafe(self, val):
'''
Our helper function, which will also recursively check dict and
list entries due to the fact that they may be repr'd and contain
a key or value which contains jinja2 syntax and would otherwise
lose the AnsibleUnsafe value.
'''
if isinstance(val, dict):
for key in val.keys():
if self._is_unsafe(val[key]):
return True
elif isinstance(val, list):
for item in val:
if self._is_unsafe(item):
return True
elif getattr(val, '__UNSAFE__', False) is True:
return True
return False
def _update_unsafe(self, val):
if val is not None and not self.unsafe and self._is_unsafe(val):
self.unsafe = True
def resolve_or_missing(self, key):
val = super(AnsibleContext, self).resolve_or_missing(key)
self._update_unsafe(val)
return val
def get_all(self):
"""Return the complete context as a dict including the exported
        variables. For optimization reasons this might not return an
actual copy so be careful with using it.
This is to prevent from running ``AnsibleJ2Vars`` through dict():
``dict(self.parent, **self.vars)``
In Ansible this means that ALL variables would be templated in the
process of re-creating the parent because ``AnsibleJ2Vars`` templates
each variable in its ``__getitem__`` method. Instead we re-create the
parent via ``AnsibleJ2Vars.add_locals`` that creates a new
``AnsibleJ2Vars`` copy without templating each variable.
This will prevent unnecessarily templating unused variables in cases
like setting a local variable and passing it to {% include %}
in a template.
        Also see ``AnsibleJ2Template`` and
https://github.com/pallets/jinja/commit/d67f0fd4cc2a4af08f51f4466150d49da7798729
"""
if not self.vars:
return self.parent
if not self.parent:
return self.vars
if isinstance(self.parent, AnsibleJ2Vars):
return self.parent.add_locals(self.vars)
else:
# can this happen in Ansible?
return dict(self.parent, **self.vars)
class JinjaPluginIntercept(MutableMapping):
    ''' Simulated dict class that loads Jinja2 plugins on request,
    otherwise all plugins would need to be loaded a priori.
    NOTE: plugin_loader still loads all 'builtin/legacy' at
    start, so only collection plugins are really loaded on request.
'''
def __init__(self, delegatee, pluginloader, *args, **kwargs):
super(JinjaPluginIntercept, self).__init__(*args, **kwargs)
self._pluginloader = pluginloader
# cache of resolved plugins
self._delegatee = delegatee
# track loaded plugins here as cache above includes 'jinja2' filters but ours should override
self._loaded_builtins = set()
def __getitem__(self, key):
if not isinstance(key, string_types):
raise ValueError('key must be a string, got %s instead' % type(key))
if key not in self._loaded_builtins:
plugin = None
try:
plugin = self._pluginloader.get(key)
except (AnsibleError, KeyError) as e:
raise TemplateSyntaxError('Could not load "%s": %s' % (key, to_native(e)), 0)
except Exception as e:
display.vvvv('Unexpected plugin load (%s) exception: %s' % (key, to_native(e)))
raise e
# if a plugin was found/loaded
if plugin:
# set in filter cache and avoid expensive plugin load
self._delegatee[key] = plugin.j2_function
self._loaded_builtins.add(key)
# let it trigger keyerror if we could not find ours or jinja2 one
func = self._delegatee[key]
        # if we do have a func and it is a filter, it needs wrapping
if self._pluginloader.type == 'filter':
            # filters need wrapping
if key in C.STRING_TYPE_FILTERS:
                # avoid literal_eval when you WANT strings
func = _wrap_native_text(func)
else:
# conditionally unroll iterators/generators to avoid having to use `|list` after every filter
func = _unroll_iterator(func)
return func
def __setitem__(self, key, value):
return self._delegatee.__setitem__(key, value)
def __delitem__(self, key):
raise NotImplementedError()
def __iter__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return iter(self._delegatee)
def __len__(self):
# not strictly accurate since we're not counting dynamically-loaded values
return len(self._delegatee)
def _fail_on_undefined(data):
"""Recursively find an undefined value in a nested data structure
and properly raise the undefined exception.
"""
if isinstance(data, Mapping):
for value in data.values():
_fail_on_undefined(value)
elif is_sequence(data):
for item in data:
_fail_on_undefined(item)
else:
if isinstance(data, StrictUndefined):
# To actually raise the undefined exception we need to
# access the undefined object otherwise the exception would
# be raised on the next access which might not be properly
# handled.
# See https://github.com/ansible/ansible/issues/52158
# and StrictUndefined implementation in upstream Jinja2.
str(data)
return data
@_unroll_iterator
def _ansible_finalize(thing):
"""A custom finalize function for jinja2, which prevents None from being
returned. This avoids a string of ``"None"`` as ``None`` has no
importance in YAML.
The function is decorated with ``_unroll_iterator`` so that users are not
required to explicitly use ``|list`` to unroll a generator. This only
affects the scenario where the final result of templating
is a generator, e.g. ``range``, ``dict.items()`` and so on. Filters
which can produce a generator in the middle of a template are already
    wrapped with ``_unroll_iterator`` in ``JinjaPluginIntercept``.
"""
return thing if _fail_on_undefined(thing) is not None else ''
class AnsibleEnvironment(NativeEnvironment):
'''
Our custom environment, which simply allows us to override the class-level
values for the Template and Context classes used by jinja2 internally.
'''
context_class = AnsibleContext
template_class = AnsibleJ2Template
concat = staticmethod(ansible_eval_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.filters = JinjaPluginIntercept(self.filters, filter_loader)
self.tests = JinjaPluginIntercept(self.tests, test_loader)
self.trim_blocks = True
self.undefined = AnsibleUndefined
self.finalize = _ansible_finalize
class AnsibleNativeEnvironment(AnsibleEnvironment):
concat = staticmethod(ansible_native_concat)
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.finalize = _unroll_iterator(_fail_on_undefined)
class Templar:
'''
The main class for templating, with the main entry-point of template().
'''
def __init__(self, loader, shared_loader_obj=None, variables=None):
if shared_loader_obj is not None:
display.deprecated(
"The `shared_loader_obj` option to `Templar` is no longer functional, "
"ansible.plugins.loader is used directly instead.",
version='2.16',
)
self._loader = loader
self._available_variables = {} if variables is None else variables
self._fail_on_undefined_errors = C.DEFAULT_UNDEFINED_VAR_BEHAVIOR
environment_class = AnsibleNativeEnvironment if C.DEFAULT_JINJA2_NATIVE else AnsibleEnvironment
self.environment = environment_class(
extensions=self._get_extensions(),
loader=FileSystemLoader(loader.get_basedir() if loader else '.'),
)
self.environment.template_class.environment_class = environment_class
# jinja2 global is inconsistent across versions, this normalizes them
self.environment.globals['dict'] = dict
# Custom globals
self.environment.globals['lookup'] = self._lookup
self.environment.globals['query'] = self.environment.globals['q'] = self._query_lookup
self.environment.globals['now'] = self._now_datetime
self.environment.globals['undef'] = self._make_undefined
# the current rendering context under which the templar class is working
self.cur_context = None
# FIXME this regex should be re-compiled each time variable_start_string and variable_end_string are changed
self.SINGLE_VAR = re.compile(r"^%s\s*(\w*)\s*%s$" % (self.environment.variable_start_string, self.environment.variable_end_string))
self.jinja2_native = C.DEFAULT_JINJA2_NATIVE
def copy_with_new_env(self, environment_class=AnsibleEnvironment, **kwargs):
r"""Creates a new copy of Templar with a new environment.
:kwarg environment_class: Environment class used for creating a new environment.
:kwarg \*\*kwargs: Optional arguments for the new environment that override existing
environment attributes.
:returns: Copy of Templar with updated environment.
"""
# We need to use __new__ to skip __init__, mainly not to create a new
# environment there only to override it below
new_env = object.__new__(environment_class)
new_env.__dict__.update(self.environment.__dict__)
new_templar = object.__new__(Templar)
new_templar.__dict__.update(self.__dict__)
new_templar.environment = new_env
new_templar.jinja2_native = environment_class is AnsibleNativeEnvironment
mapping = {
'available_variables': new_templar,
'searchpath': new_env.loader,
}
for key, value in kwargs.items():
obj = mapping.get(key, new_env)
try:
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
return new_templar
def _get_extensions(self):
'''
Return jinja2 extensions to load.
If some extensions are set via jinja_extensions in ansible.cfg, we try
to load them with the jinja environment.
'''
jinja_exts = []
if C.DEFAULT_JINJA2_EXTENSIONS:
# make sure the configuration directive doesn't contain spaces
# and split extensions in an array
jinja_exts = C.DEFAULT_JINJA2_EXTENSIONS.replace(" ", "").split(',')
return jinja_exts
@property
def available_variables(self):
return self._available_variables
@available_variables.setter
def available_variables(self, variables):
'''
Sets the list of template variables this Templar instance will use
to template things, so we don't have to pass them around between
internal methods. We also clear the template cache here, as the variables
are being changed.
'''
if not isinstance(variables, Mapping):
raise AnsibleAssertionError("the type of 'variables' should be a Mapping but was a %s" % (type(variables)))
self._available_variables = variables
@contextmanager
def set_temporary_context(self, **kwargs):
"""Context manager used to set temporary templating context, without having to worry about resetting
original values afterward
Use a keyword that maps to the attr you are setting. Applies to ``self.environment`` by default, to
set context on another object, it must be in ``mapping``.
"""
mapping = {
'available_variables': self,
'searchpath': self.environment.loader,
}
original = {}
for key, value in kwargs.items():
obj = mapping.get(key, self.environment)
try:
original[key] = getattr(obj, key)
if value is not None:
setattr(obj, key, value)
except AttributeError:
# Ignore invalid attrs
pass
yield
for key in original:
obj = mapping.get(key, self.environment)
setattr(obj, key, original[key])
def template(self, variable, convert_bare=False, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None,
convert_data=True, static_vars=None, cache=None, disable_lookups=False):
'''
Templates (possibly recursively) any given data as input. If convert_bare is
set to True, the given data will be wrapped as a jinja2 variable ('{{foo}}')
before being sent through the template engine.
'''
static_vars = [] if static_vars is None else static_vars
if cache is not None:
display.deprecated("The `cache` option to `Templar.template` is no longer functional, and will be removed in a future release.", version='2.18')
# Don't template unsafe variables, just return them.
if hasattr(variable, '__UNSAFE__'):
return variable
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
if convert_bare:
variable = self._convert_bare_variable(variable)
if isinstance(variable, string_types):
if not self.is_possibly_template(variable):
return variable
# Check to see if the string we are trying to render is just referencing a single
# var. In this case we don't want to accidentally change the type of the variable
# to a string by using the jinja template renderer. We just want to pass it.
only_one = self.SINGLE_VAR.match(variable)
if only_one:
var_name = only_one.group(1)
if var_name in self._available_variables:
resolved_val = self._available_variables[var_name]
if isinstance(resolved_val, NON_TEMPLATED_TYPES):
return resolved_val
elif resolved_val is None:
return C.DEFAULT_NULL_REPRESENTATION
result = self.do_template(
variable,
preserve_trailing_newlines=preserve_trailing_newlines,
escape_backslashes=escape_backslashes,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
convert_data=convert_data,
)
return result
elif is_sequence(variable):
return [self.template(
v,
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
) for v in variable]
elif isinstance(variable, Mapping):
d = {}
# we don't use iteritems() here to avoid problems if the underlying dict
# changes sizes due to the templating, which can happen with hostvars
for k in variable.keys():
if k not in static_vars:
d[k] = self.template(
variable[k],
preserve_trailing_newlines=preserve_trailing_newlines,
fail_on_undefined=fail_on_undefined,
overrides=overrides,
disable_lookups=disable_lookups,
)
else:
d[k] = variable[k]
return d
else:
return variable
def is_template(self, data):
'''lets us know if data has a template'''
if isinstance(data, string_types):
return is_template(data, self.environment)
elif isinstance(data, (list, tuple)):
for v in data:
if self.is_template(v):
return True
elif isinstance(data, dict):
for k in data:
if self.is_template(k) or self.is_template(data[k]):
return True
return False
templatable = is_template
def is_possibly_template(self, data):
return is_possibly_template(data, self.environment)
def _convert_bare_variable(self, variable):
'''
Wraps a bare string, which may have an attribute portion (ie. foo.bar)
in jinja2 variable braces so that it is evaluated properly.
'''
if isinstance(variable, string_types):
contains_filters = "|" in variable
first_part = variable.split("|")[0].split(".")[0].split("[")[0]
if (contains_filters or first_part in self._available_variables) and self.environment.variable_start_string not in variable:
return "%s%s%s" % (self.environment.variable_start_string, variable, self.environment.variable_end_string)
# the variable didn't meet the conditions to be converted,
# so just return it as-is
return variable
def _fail_lookup(self, name, *args, **kwargs):
raise AnsibleError("The lookup `%s` was found, however lookups were disabled from templating" % name)
def _now_datetime(self, utc=False, fmt=None):
'''jinja2 global function to return current datetime, potentially formatted via strftime'''
if utc:
now = datetime.datetime.utcnow()
else:
now = datetime.datetime.now()
if fmt:
return now.strftime(fmt)
return now
def _query_lookup(self, name, *args, **kwargs):
''' wrapper for lookup, force wantlist true'''
kwargs['wantlist'] = True
return self._lookup(name, *args, **kwargs)
def _lookup(self, name, *args, **kwargs):
instance = lookup_loader.get(name, loader=self._loader, templar=self)
if instance is None:
raise AnsibleError("lookup plugin (%s) not found" % name)
wantlist = kwargs.pop('wantlist', False)
allow_unsafe = kwargs.pop('allow_unsafe', C.DEFAULT_ALLOW_UNSAFE_LOOKUPS)
errors = kwargs.pop('errors', 'strict')
loop_terms = listify_lookup_plugin_terms(terms=args, templar=self, fail_on_undefined=True, convert_bare=False)
# safely catch run failures per #5059
try:
ran = instance.run(loop_terms, variables=self._available_variables, **kwargs)
except (AnsibleUndefinedVariable, UndefinedError) as e:
raise AnsibleUndefinedVariable(e)
except AnsibleOptionsError as e:
# invalid options given to lookup, just reraise
raise e
except AnsibleLookupError as e:
# lookup handled error but still decided to bail
msg = 'Lookup failed but the error is being ignored: %s' % to_native(e)
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
raise e
return [] if wantlist else None
except Exception as e:
# errors not handled by lookup
msg = u"An unhandled exception occurred while running the lookup plugin '%s'. Error was a %s, original message: %s" % \
(name, type(e), to_text(e))
if errors == 'warn':
display.warning(msg)
elif errors == 'ignore':
display.display(msg, log_only=True)
else:
display.vvv('exception during Jinja2 execution: {0}'.format(format_exc()))
raise AnsibleError(to_native(msg), orig_exc=e)
return [] if wantlist else None
if not is_sequence(ran):
display.deprecated(
f'The lookup plugin \'{name}\' was expected to return a list, got \'{type(ran)}\' instead. '
f'The lookup plugin \'{name}\' needs to be changed to return a list. '
'This will be an error in Ansible 2.18',
version='2.18'
)
if ran and allow_unsafe is False:
if self.cur_context:
self.cur_context.unsafe = True
if wantlist:
return wrap_var(ran)
try:
if isinstance(ran[0], NativeJinjaText):
ran = wrap_var(NativeJinjaText(",".join(ran)))
else:
ran = wrap_var(",".join(ran))
except TypeError:
# Lookup Plugins should always return lists. Throw an error if that's not
# the case:
if not isinstance(ran, Sequence):
raise AnsibleError("The lookup plugin '%s' did not return a list."
% name)
# The TypeError we can recover from is when the value *inside* of the list
# is not a string
if len(ran) == 1:
ran = wrap_var(ran[0])
else:
ran = wrap_var(ran)
except KeyError:
# Lookup Plugin returned a dict. Return comma-separated string of keys
# for backwards compat.
# FIXME this can be removed when support for non-list return types is removed.
# See https://github.com/ansible/ansible/pull/77789
ran = wrap_var(",".join(ran))
return ran
def _make_undefined(self, hint=None):
from jinja2.runtime import Undefined
if hint is None or isinstance(hint, Undefined) or hint == '':
hint = "Mandatory variable has not been overridden"
return AnsibleUndefined(hint)
def do_template(self, data, preserve_trailing_newlines=True, escape_backslashes=True, fail_on_undefined=None, overrides=None, disable_lookups=False,
convert_data=False):
if self.jinja2_native and not isinstance(data, string_types):
return data
# For preserving the number of input newlines in the output (used
# later in this method)
data_newlines = _count_newlines_from_end(data)
if fail_on_undefined is None:
fail_on_undefined = self._fail_on_undefined_errors
has_template_overrides = data.startswith(JINJA2_OVERRIDE)
try:
# NOTE Creating an overlay that lives only inside do_template means that overrides are not applied
# when templating nested variables in AnsibleJ2Vars where Templar.environment is used, not the overlay.
# This is historic behavior that is kept for backwards compatibility.
if overrides:
myenv = self.environment.overlay(overrides)
elif has_template_overrides:
myenv = self.environment.overlay()
else:
myenv = self.environment
# Get jinja env overrides from template
if has_template_overrides:
eol = data.find('\n')
line = data[len(JINJA2_OVERRIDE):eol]
data = data[eol + 1:]
for pair in line.split(','):
if ':' not in pair:
raise AnsibleError("failed to parse jinja2 override '%s'."
" Did you use something different from colon as key-value separator?" % pair.strip())
(key, val) = pair.split(':', 1)
key = key.strip()
setattr(myenv, key, ast.literal_eval(val.strip()))
if escape_backslashes:
# Allow users to specify backslashes in playbooks as "\\" instead of as "\\\\".
data = _escape_backslashes(data, myenv)
try:
t = myenv.from_string(data)
except TemplateSyntaxError as e:
raise AnsibleError("template error while templating string: %s. String: %s" % (to_native(e), to_native(data)), orig_exc=e)
except Exception as e:
if 'recursion' in to_native(e):
raise AnsibleError("recursive loop detected in template string: %s" % to_native(data), orig_exc=e)
else:
return data
if disable_lookups:
t.globals['query'] = t.globals['q'] = t.globals['lookup'] = self._fail_lookup
jvars = AnsibleJ2Vars(self, t.globals)
# In case this is a recursive call to do_template we need to
# save/restore cur_context to prevent overriding __UNSAFE__.
cached_context = self.cur_context
# In case this is a recursive call and we set different concat
# function up the stack, reset it in case the value of convert_data
# changed in this call
self.environment.concat = self.environment.__class__.concat
# the concat function is set for each Ansible environment,
# however for convert_data=False we need to use the concat
# function that avoids any evaluation and set it temporarily
# on the environment so it is used correctly even when
# the concat function is called internally in Jinja,
# most notably for macro execution
if not self.jinja2_native and not convert_data:
self.environment.concat = ansible_concat
self.cur_context = t.new_context(jvars, shared=True)
rf = t.root_render_func(self.cur_context)
try:
res = self.environment.concat(rf)
unsafe = getattr(self.cur_context, 'unsafe', False)
if unsafe:
res = wrap_var(res)
except TypeError as te:
if 'AnsibleUndefined' in to_native(te):
errmsg = "Unable to look up a name or access an attribute in template string (%s).\n" % to_native(data)
errmsg += "Make sure your variable name does not contain invalid characters like '-': %s" % to_native(te)
raise AnsibleUndefinedVariable(errmsg, orig_exc=te)
else:
display.debug("failing because of a type error, template data is: %s" % to_text(data))
raise AnsibleError("Unexpected templating type error occurred on (%s): %s" % (to_native(data), to_native(te)), orig_exc=te)
finally:
self.cur_context = cached_context
if isinstance(res, string_types) and preserve_trailing_newlines:
# The low level calls above do not preserve the newline
                # characters at the end of the input data, so we
                # calculate the difference in newlines and append them
# to the resulting output for parity
#
# Using Environment's keep_trailing_newline instead would
# result in change in behavior when trailing newlines
# would be kept also for included templates, for example:
# "Hello {% include 'world.txt' %}!" would render as
# "Hello world\n!\n" instead of "Hello world!\n".
res_newlines = _count_newlines_from_end(res)
if data_newlines > res_newlines:
res += self.environment.newline_sequence * (data_newlines - res_newlines)
if unsafe:
res = wrap_var(res)
return res
except (UndefinedError, AnsibleUndefinedVariable) as e:
if fail_on_undefined:
raise AnsibleUndefinedVariable(e, orig_exc=e)
else:
display.debug("Ignoring undefined failure: %s" % to_text(e))
return data
# for backwards compatibility in case anyone is using old private method directly
_do_template = do_template
closed | ansible/ansible | https://github.com/ansible/ansible | #78912 | Change to filter/test plugin loading / templating breaks certain playbook constructs
issue: https://github.com/ansible/ansible/issues/78912 | pull: https://github.com/ansible/ansible/pull/78913
before_fix_sha: 367cdae3b279a5281a56808827af27c8883a4ad4 | after_fix_sha: 6d0aeac1e166842f2833f4fb64c727cc7f818118
reported: 2022-09-28T18:53:10Z | language: python | committed: 2022-10-04T13:44:00Z
updated_file: test/integration/targets/templating/tasks/main.yml
- command: echo {% raw %}{{ foo }}{% endraw %}
register: result
- assert:
that:
- result.stdout_lines|first == expected
vars:
expected: !unsafe '{{ foo }}'
- name: Assert that templating can convert JSON null, true, and false to Python
assert:
that:
- foo.null is none
- foo.true is true
- foo.false is false
vars:
# Kind of hack to just send a JSON string through jinja, by templating out nothing
foo: '{{ "" }}{"null": null, "true": true, "false": false}'
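# A regression check for the aliased '==' test from the issue above could look
# like this (a hypothetical sketch; the test actually added by the fix may differ)
- name: Ensure aliased tests such as '==' resolve through the Jinja2 builtins
  assert:
    that:
      - "[{'failed': true}, {'failed': false}] | selectattr('failed', '==', true) | list | length == 1"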
closed | ansible/ansible | https://github.com/ansible/ansible | #78912 | Change to filter/test plugin loading / templating breaks certain playbook constructs
issue: https://github.com/ansible/ansible/issues/78912 | pull: https://github.com/ansible/ansible/pull/78913
before_fix_sha: 367cdae3b279a5281a56808827af27c8883a4ad4 | after_fix_sha: 6d0aeac1e166842f2833f4fb64c727cc7f818118
reported: 2022-09-28T18:53:10Z | language: python | committed: 2022-10-04T13:44:00Z
updated_file: test/integration/targets/templating/templates/invalid_test_name.j2 (empty)
closed | ansible/ansible | https://github.com/ansible/ansible | #78925 | Docs: various files - replace boolean yes/no with true/false
### Summary
Based on the [steering committee vote to use true/false for booleans](https://github.com/ansible-community/community-topics/discussions/120), we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a follow-on comment.
Changes are: change `yes` to `true` and `no` to `false`; the values must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
issue: https://github.com/ansible/ansible/issues/78925 | pull: https://github.com/ansible/ansible/pull/78980
before_fix_sha: 6d0aeac1e166842f2833f4fb64c727cc7f818118 | after_fix_sha: 56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
reported: 2022-09-29T14:38:19Z | language: python | committed: 2022-10-04T14:29:26Z
updated_file: docs/docsite/rst/community/collection_contributors/collection_integration_about.rst
.. _collection_integration_tests_about:
Understanding integration tests
=================================
.. note::
Some collections do not have integration tests.
Integration tests are functional tests of modules and plugins.
With integration tests, we check if a module or plugin satisfies its functional requirements. Simply put, we check that features work as expected and users get the outcome described in the module or plugin documentation.
There are :ref:`two kinds of integration tests <collections_adding_integration_test>` used in collections:
* integration tests that use Ansible roles
* integration tests that use ``runme.sh``.
This section focuses on integration tests that use Ansible roles.
Integration tests check modules with playbooks that invoke those modules. The tests pass standalone parameters and their combinations, check what the module or plugin reports with the :ref:`assert <ansible_collections.ansible.builtin.assert_module>` module, and the actual state of the system after each task.
Integration test example
-------------------------
Let's say we want to test the ``postgresql_user`` module invoked with the ``name`` parameter. We expect that the module will both create a user based on the provided value of the ``name`` parameter and will report that the system state has changed. We cannot rely on only what the module reports. To be sure that the user has been created, we query our database with another module to see if the user exists.
.. code-block:: yaml
- name: Create PostgreSQL user and store module's output to the result variable
postgresql_user:
name: test_user
register: result
- name: Check the module returns what we expect
assert:
that:
- result is changed
- name: Check actual system state with another module, in other words, that the user exists
postgresql_query:
      query: SELECT * FROM pg_authid WHERE rolname = 'test_user'
register: query_result
- name: We expect it returns one row, check it
assert:
that:
- query_result.rowcount == 1
Details about integration tests
--------------------------------
The basic entity of an Ansible integration test is a ``target``. The target is an :ref:`Ansible role <playbooks_reuse_roles>` stored in the ``tests/integration/targets`` directory of the collection repository. The target role contains everything that is needed to test a module.
The names of targets contain the module or plugin name that they test. Target names that start with ``setup_`` are usually executed as dependencies before module and plugin targets start execution. See :ref:`collection_creating_integration_tests` for details.
To run integration tests, we use the ``ansible-test`` utility that is included in the ``ansible-core`` and ``ansible`` packages. See :ref:`collection_run_integration_tests` for details. After you finish your integration tests, refer to :ref:`collection_quickstart` to learn how to submit a pull request.
.. _collection_integration_prepare:
Preparing for integration tests for collections
=================================================
To prepare for developing integration tests:
#. :ref:`Set up your local environment <collection_prepare_environment>`.
#. Determine if integration tests already exist.
.. code-block:: bash
ansible-test integration --list-targets
If a collection already has integration tests, they are stored in ``tests/integration/targets/*`` subdirectories of the collection repository.
If you use ``bash`` and the ``argcomplete`` package is installed with ``pip`` on your system, you can also get a full target list.
.. code-block:: shell
ansible-test integration <tab><tab>
Alternately, you can check if the ``tests/integration/targets`` directory contains a corresponding directory with the same name as the module. For example, the tests for the ``postgresql_user`` module of the ``community.postgresql`` collection are stored in the ``tests/integration/targets/postgresql_user`` directory of the collection repository. If there is no corresponding target there, then that module does not have integration tests. In this case, consider adding integration tests for the module. See :ref:`collection_creating_integration_tests` for details.
.. _collection_integration_recommendations:
Recommendations on coverage
===========================
Bugfixes
--------
Before fixing code, create a test case in an :ref:`appropriate test target<collection_integration_prepare>` that reproduces the bug provided by the issue reporter and described in the ``Steps to Reproduce`` issue section. :ref:`Run <collection_run_integration_tests>` the tests.
If you fail to reproduce the bug, ask the reporter to provide additional information; the issue may be related to environment settings. Sometimes specific environment issues cannot be reproduced in integration tests. In that case, manual testing by the issue reporter or other interested users is required.
Refactoring code
----------------
When refactoring code, always check that related options are covered in a :ref:`corresponding test target<collection_integration_prepare>`. Do not assume that everything is covered just because the test target exists.
.. _collections_recommendation_modules:
Covering modules / new features
-------------------------------
When covering a module, cover all its options separately and their meaningful combinations. Every possible use of the module should be tested against:
- Idempotency - Does rerunning a task report no changes?
- Check-mode - Does dry-running a task behave the same as a real run? Does it avoid making any changes?
- Return values - Does the module return values consistently under different conditions?
Each test action has to be tested at least the following times (a combined sketch follows this list):
- Perform an action in check-mode if supported. This should indicate a change.
- Check with another module that the changes have ``not`` actually been made.
- Perform the action for real. This should indicate a change.
- Check with another module that the changes have actually been made.
- Perform the action again in check-mode. This should indicate ``no`` change.
- Perform the action again for real. This should indicate ``no`` change.
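The following is a minimal sketch of the first steps of this sequence for a hypothetical ``abstract_module`` with a ``path`` option (both names are illustrative):

.. code-block:: yaml

   - name: Create a file in check-mode
     abstract_module:
       path: /tmp/test_file
     check_mode: true
     register: result

   - name: Check the task reports a change
     assert:
       that:
         - result is changed

   - name: Check the file has not actually been created
     stat:
       path: /tmp/test_file
     register: stat_result

   - name: Check the stat result
     assert:
       that:
         - not stat_result.stat.exists

The remaining steps repeat the same pattern: run the task for real and verify the change was made, then run it again in check-mode and for real, expecting no change either time.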
To check a task:
1. Register the outcome of the task as a variable, for example, ``register: result``. Using the :ref:`assert <ansible_collections.ansible.builtin.assert_module>` module, check:
#. If ``- result is changed`` or not.
#. Expected return values.
2. If the module changes the system state, check the actual system state using at least one other module. For example, if the module changes a file, we can check that the file has been changed by checking its checksum with the :ref:`stat <ansible_collections.ansible.builtin.stat_module>` module before and after the test tasks.
3. Run the same task with ``check_mode: true`` if check-mode is supported by the module. Check with other modules that the actual system state has not been changed.
4. Cover cases when the module must fail. Use the ``ignore_errors: true`` option and check the returned message with the ``assert`` module.
Example:
.. code-block:: yaml
- name: Task to fail
abstract_module:
...
register: result
ignore_errors: true
- name: Check the task fails and its error message
assert:
that:
- result is failed
- result.msg == 'Message we expect'
Here is a summary:
- Cover options and their sensible combinations.
- Check returned values.
- Cover check-mode if supported.
- Check a system state using other modules.
- Check the cases in which a module must fail, and verify its error messages.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,925 |
Docs: various files - replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a followon comment.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rstt
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78925
|
https://github.com/ansible/ansible/pull/78980
|
6d0aeac1e166842f2833f4fb64c727cc7f818118
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
| 2022-09-29T14:38:19Z |
python
| 2022-10-04T14:29:26Z |
docs/docsite/rst/community/collection_contributors/collection_integration_add.rst
|
.. _collection_creating_integration_tests:
Creating new integration tests
=================================
This section covers the following cases:
- There are no integration tests for a collection / group of modules in a collection at all.
- You are adding a new module and you want to include integration tests.
- You want to add integration tests for a module that already exists without integration tests.
In other words, there are currently no tests for a module regardless of whether the module exists or not.
If the module already has tests, see :ref:`collection_updating_integration_tests`.
Simplified example
--------------------
Here is a simplified abstract example.
Let's say we are going to add integration tests to a new module in the ``community.abstract`` collection which interacts with some service.
We :ref:`checked<collection_integration_prepare>` and determined that there are no integration tests at all.
We should basically do the following:
1. Install and run the service with a ``setup`` target.
2. Create a test target.
3. Add integration tests for the module.
4. :ref:`Run the tests<collection_run_integration_tests>`.
5. Fix the code and tests as needed, run the tests again, and repeat the cycle until they pass.
.. note::
You can reuse the ``setup`` target when implementing other targets that also use the same service.
1. Clone the collection to the ``~/ansible_collections/community.abstract`` directory on your local machine.
2. From the ``~/ansible_collections/community.abstract`` directory, create directories for the ``setup`` target:
.. code-block:: bash
mkdir -p tests/integration/targets/setup_abstract_service/tasks
3. Write all the tasks needed to prepare the environment, install, and run the service.
For simplicity, let's imagine that the service is available in the native distribution repositories and no sophisticated environment configuration is required.
Add the following tasks to the ``tests/integration/targets/setup_abstract_service/tasks/main.yml`` file to install and run the service:
.. code-block:: yaml
- name: Install abstract service
package:
name: abstract_service
- name: Run the service
systemd:
name: abstract_service
state: started
This is a very simplified example.
4. Add the target for the module you are testing.
Let's say the module is called ``abstract_service_info``. Create the following directory structure in the target:
.. code-block:: bash
mkdir -p tests/integration/targets/abstract_service_info/tasks
mkdir -p tests/integration/targets/abstract_service_info/meta
Add all of the needed subdirectories. For example, if you are going to use defaults and files, add the ``defaults`` and ``files`` directories, and so on. The approach is the same as when you are creating a role.
5. To make the ``setup_abstract_service`` target run before the module's target, add the following lines to the ``tests/integration/targets/abstract_service_info/meta/main.yml`` file.
.. code-block:: yaml
dependencies:
- setup_abstract_service
6. Start with writing a single stand-alone task to check that your module can interact with the service.
We assume that the ``abstract_service_info`` module fetches some information from the ``abstract_service`` and that it has two connection parameters.
Among other fields, it returns a field called ``version`` containing a service version.
Add the following to ``tests/integration/targets/abstract_service_info/tasks/main.yml``:
.. code-block:: yaml
- name: Fetch info from abstract service
abstract_service_info:
host: 127.0.0.1 # We assume the service accepts local connection by default
port: 1234 # We assume that the service is listening on this port by default
register: result # This variable will contain the returned JSON including the server version
- name: Test the output
assert:
that:
- result.version == '1.0.0' # Check version field contains what we expect
7. :ref:`Run the tests<collection_run_integration_tests>` with the ``-vvv`` argument.
If there are any issues with connectivity (for example, the service is not accepting connections) or with the code, the play will fail.
Examine the output to see at which step the failure occurred. Investigate the reason, fix, and run again. Repeat the cycle until the test passes.
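For this example, the invocation could look like this (the container name is illustrative):

.. code-block:: bash

   ansible-test integration abstract_service_info --docker default -vvv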
8. If the test succeeds, write more tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.
``community.postgresql`` example
--------------------------------
Here is a real example of writing integration tests from scratch for the ``community.postgresql.postgresql_info`` module.
For the sake of simplicity, we will create very basic tests which we will run using the Ubuntu 20.04 test container.
We use ``Linux`` as a work environment and have ``git`` and ``docker`` installed and running.
We also installed ``ansible-core``.
1. Create the following directories in your home directory:
.. code-block:: bash
mkdir -p ~/ansible_collections/community
2. Fork the `collection repository <https://github.com/ansible-collections/community.postgresql>`_ through the GitHub web interface.
3. Clone the forked repository from your profile to the created path:
.. code-block:: bash
git clone https://github.com/YOURACC/community.postgresql.git ~/ansible_collections/community/postgresql
If you prefer to use the SSH protocol:
.. code-block:: bash
git clone [email protected]:YOURACC/community.postgresql.git ~/ansible_collections/community/postgresql
4. Go to the cloned repository:
.. code-block:: bash
cd ~/ansible_collections/community/postgresql
5. Be sure you are in the default branch:
.. code-block:: bash
git status
6. Checkout a test branch:
.. code-block:: bash
git checkout -b postgresql_info_tests
7. Tests for the ``postgresql_info`` module already exist, so to simulate writing them from scratch we will remove them with the following command:
.. code-block:: bash
rm -rf tests/integration/targets/*
With all of the targets now removed, the current state is as if we do not have any integration tests for the ``community.postgresql`` collection at all. We can now start writing integration tests from scratch.
8. We will start with creating a ``setup`` target that will install all required packages and will launch PostgreSQL. Create the following directories:
.. code-block:: bash
mkdir -p tests/integration/targets/setup_postgresql_db/tasks
9. Create the ``tests/integration/targets/setup_postgresql_db/tasks/main.yml`` file and add the following tasks to it:
.. code-block:: yaml
- name: Install required packages
package:
name:
- apt-utils
- postgresql
- postgresql-common
- python3-psycopg2
- name: Initialize PostgreSQL
shell: . /usr/share/postgresql-common/maintscripts-functions && set_system_locale && /usr/bin/pg_createcluster -u postgres 12 main
args:
creates: /etc/postgresql/12/
- name: Start PostgreSQL service
service:
name: postgresql
state: started
That is enough for our very basic example.
10. Then, create the following directories for the ``postgresql_info`` target:
.. code-block:: bash
mkdir -p tests/integration/targets/postgresql_info/tasks tests/integration/targets/postgresql_info/meta
11. To make the ``setup_postgresql_db`` target run before the ``postgresql_info`` target as a dependency, create the ``tests/integration/targets/postgresql_info/meta/main.yml`` file and add the following code to it:
.. code-block:: yaml
dependencies:
- setup_postgresql_db
12. Now we are ready to add our first test task for the ``postgresql_info`` module. Create the ``tests/integration/targets/postgresql_info/tasks/main.yml`` file and add the following code to it:
.. code-block:: yaml
- name: Test postgresql_info module
become: true
become_user: postgres
postgresql_info:
login_user: postgres
login_db: postgres
register: result
- name: Check the module returns what we expect
assert:
that:
- result is not changed
- result.version.major == 12
- result.version.minor == 8
In the first task, we run the ``postgresql_info`` module to fetch information from the database we installed and launched with the ``setup_postgresql_db`` target. We are saving the values returned by the module into the ``result`` variable.
In the second task, we check the ``result`` variable, which is what the first task returned, with the ``assert`` module. We expect that, among other things, the result has the version and reports that the system state has not been changed.
13. Run the tests in the Ubuntu 20.04 docker container:
.. code-block:: bash
ansible-test integration postgresql_info --docker ubuntu2004 -vvv
The tests should pass. If we look at the output, we should see something like the following:
.. code-block:: shell
TASK [postgresql_info : Check the module returns what we expect] ***************
ok: [testhost] => {
"changed": false,
"msg": "All assertions passed"
}
If your tests fail when you are working on your project, examine the output to see at which step the failure occurred. Investigate the reason, fix, and run again. Repeat the cycle until the test passes. If the test succeeds, write more tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,925 |
Docs: various files - replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a followon comment.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rstt
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78925
|
https://github.com/ansible/ansible/pull/78980
|
6d0aeac1e166842f2833f4fb64c727cc7f818118
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
| 2022-09-29T14:38:19Z |
python
| 2022-10-04T14:29:26Z |
docs/docsite/rst/community/collection_contributors/collection_integration_updating.rst
|
.. _collection_updating_integration_tests:
Adding to an existing integration test
=======================================
The test tasks are stored in the ``tests/integration/targets/<target_name>/tasks`` directory.
The ``main.yml`` file holds test tasks and includes other test files.
Look for a suitable test file in which to integrate your tests, or create and include/import a separate test file.
You can use one of the existing test files as a draft.
When fixing a bug
-----------------
When fixing a bug:
1. :ref:`Determine if integration tests for the module exist<collection_integration_prepare>`. If they do not, see :ref:`collection_creating_integration_tests` section.
2. Add a task which reproduces the bug to an appropriate file within the ``tests/integration/targets/<target_name>/tasks`` directory.
3. :ref:`Run the tests<collection_run_integration_tests>`. The newly added task should fail.
4. If they do not fail, re-check if your environment / test task satisfies the conditions described in the ``Steps to Reproduce`` section of the issue.
5. If you reproduce the bug and tests fail, change the code.
6. :ref:`Run the tests<collection_run_integration_tests>` again.
7. If they fail, repeat steps 5-6 until the tests pass.
Here is an example.
Let's say someone reported an issue in the ``community.postgresql`` collection that when users pass a name containing underscores to the ``postgresql_user`` module, the module fails.
We cloned the collection repository to the ``~/ansible_collections/community/postgresql`` directory and :ref:`prepared our environment <collection_prepare_environment>`. From the collection's root directory, we run ``ansible-test integration --list-targets`` and it shows a target called ``postgresql_user``. It means that we already have tests for the module.
We start with reproducing the bug.
First, we look into the ``tests/integration/targets/postgresql_user/tasks/main.yml`` file. In this particular case, the file imports other files from the ``tasks`` directory. The ``postgresql_user_general.yml`` file looks like an appropriate one to add our tests to.
.. code-block:: yaml
# General tests:
- import_tasks: postgresql_user_general.yml
when: postgres_version_resp.stdout is version('9.4', '>=')
We will add the following code to the file.
.. code-block:: yaml
# https://github.com/ansible-collections/community.postgresql/issues/NUM
- name: Test user name containing underscore
postgresql_user:
name: underscored_user
register: result
- name: Check the module returns what we expect
assert:
that:
- result is changed
- name: Query the database if the user exists
postgresql_query:
query: SELECT * FROM pg_authid WHERE rolname = 'underscored_user'
register: result
- name: Check the database returns one row
assert:
that:
- result.rowcount == 1
When we :ref:`run the tests<collection_run_integration_tests>` with ``postgresql_user`` as a test target, this task must fail.
Now that we have our failing test, we will fix the bug and run the same tests again. Once the tests pass, we will consider the bug fixed and will submit a pull request.
When adding a new feature
-------------------------
.. note::
The process described in this section also applies when you want to add integration tests to a feature that already exists, but is missing integration tests.
If you have not already implemented the new feature, you can start with writing the integration tests for it. Of course they will not work as the code does not yet exist, but they can help you improve your implementation design before you start writing any code.
When adding new features, the process of adding tests consists of the following steps:
1. :ref:`Determine if integration tests for the module exist<collection_integration_prepare>`. If they do not, see :ref:`collection_creating_integration_tests`.
2. Find an appropriate file for your tests within the ``tests/integration/targets/<target_name>/tasks`` directory.
3. Cover your feature with tests. Refer to the :ref:`Recommendations on coverage<collection_integration_recommendations>` section for details.
4. :ref:`Run the tests<collection_run_integration_tests>`.
5. If they fail, see the test output for details. Fix your code or tests and run the tests again.
6. Repeat steps 4-5 until the tests pass.
Here is an example.
Let's say we decided to add a new option called ``add_attribute`` to the ``postgresql_user`` module of the ``community.postgresql`` collection.
The option is boolean. If set to ``true``, it adds an additional attribute to a database user.
We cloned the collection repository to the ``~/ansible_collections/community/postgresql`` directory and :ref:`prepared our environment<collection_integration_prepare>`. From the collection's root directory, we run ``ansible-test integration --list-targets`` and it shows a target called ``postgresql_user``. Therefore, we already have some tests for the module.
First, we look at the ``tests/integration/targets/<target_name>/tasks/main.yml`` file. In this particular case, the file imports other files from the ``tasks`` directory. The ``postgresql_user_general.yml`` file looks like an appropriate one to add our tests to.
.. code-block:: yaml
# General tests:
- import_tasks: postgresql_user_general.yml
when: postgres_version_resp.stdout is version('9.4', '>=')
We will add the following code to the file.
.. code-block:: yaml
# https://github.com/ansible-collections/community.postgresql/issues/NUM
# We should also run the same tasks with check_mode: true. We omit it here for simplicity.
- name: Test for new_option, create new user WITHOUT the attribute
postgresql_user:
name: test_user
add_attribute: false
register: result
- name: Check the module returns what we expect
assert:
that:
- result is changed
- name: Query the database if the user exists but does not have the attribute (it is NULL)
postgresql_query:
query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute IS NULL
register: result
- name: Check the database returns one row
assert:
that:
- result.rowcount == 1
- name: Test for new_option, create new user WITH the attribute
postgresql_user:
name: test_user
add_attribute: true
register: result
- name: Check the module returns what we expect
assert:
that:
- result is changed
- name: Query the database if the user has the attribute (it is TRUE)
postgresql_query:
query: SELECT * FROM pg_authid WHERE rolname = 'test_user' AND attribute = 't'
register: result
- name: Check the database returns one row
assert:
that:
- result.rowcount == 1
Then we :ref:`run the tests<collection_run_integration_tests>` with ``postgresql_user`` passed as a test target.
In reality, we would alternate the tasks above with the same tasks run with the ``check_mode: true`` option to be sure our option works as expected in check-mode as well. See :ref:`Recommendations on coverage<collection_integration_recommendations>` for details.
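For instance, a check-mode counterpart of the first task above could look like this sketch:

.. code-block:: yaml

   - name: Test for new_option, create new user WITHOUT the attribute (check mode)
     postgresql_user:
       name: test_user
       add_attribute: false
     check_mode: true
     register: result

   - name: Check the module reports a change in check mode
     assert:
       that:
         - result is changed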
If we expect a task to fail, we use the ``ignore_errors: true`` option and check that the task actually failed and returned the message we expect:
.. code-block:: yaml
- name: Test for fail_when_true option
postgresql_user:
name: test_user
fail_when_true: true
register: result
ignore_errors: true
- name: Check the module fails and returns message we expect
assert:
that:
- result is failed
- result.msg == 'The message we expect'
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,925 |
Docs: various files - replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a followon comment.
Changes are: change `yes` to `true` and `no` to `false`
must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rstt
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78925
|
https://github.com/ansible/ansible/pull/78980
|
6d0aeac1e166842f2833f4fb64c727cc7f818118
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
| 2022-09-29T14:38:19Z |
python
| 2022-10-04T14:29:26Z |
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
|
.. _developing_modules_general_windows:
**************************************
Windows module development walkthrough
**************************************
In this section, we will walk through developing, testing, and debugging an
Ansible Windows module.
Because Windows modules are written in Powershell and need to be run on a
Windows host, this guide differs from the usual development walkthrough guide.
What's covered in this section:
.. contents::
:local:
Windows environment setup
=========================
Unlike Python module development which can be run on the host that runs
Ansible, Windows modules need to be written and tested for Windows hosts.
While evaluation editions of Windows can be downloaded from
Microsoft, these images are usually not ready to be used by Ansible without
further modification. The easiest way to set up a Windows host so that it is
ready to be used by Ansible is to set up a virtual machine using Vagrant.
Vagrant can be used to download existing OS images called *boxes* that are then
deployed to a hypervisor like VirtualBox. These boxes can either be created and
stored offline or they can be downloaded from a central repository called
Vagrant Cloud.
This guide will use the Vagrant boxes created by the `packer-windoze <https://github.com/jborean93/packer-windoze>`_
repository which have also been uploaded to `Vagrant Cloud <https://app.vagrantup.com/boxes/search?utf8=%E2%9C%93&sort=downloads&provider=&q=jborean93>`_.
To find out more about how these images are created, please go to the GitHub
repo and look at the ``README`` file.
Before you can get started, the following programs must be installed (please consult the Vagrant and
VirtualBox documentation for installation instructions):
- Vagrant
- VirtualBox
Create a Windows server in a VM
===============================
To create a single Windows Server 2016 instance, run the following:
.. code-block:: shell
vagrant init jborean93/WindowsServer2016
vagrant up
This will download the Vagrant box from Vagrant Cloud and add it to the local
boxes on your host and then start up that instance in VirtualBox. When starting
for the first time, the Windows VM will run through the sysprep process and
then create an HTTP and HTTPS WinRM listener automatically. Vagrant will finish
its process once the listeners are online, after which the VM can be used by Ansible.
Create an Ansible inventory
===========================
The following Ansible inventory file can be used to connect to the newly
created Windows VM:
.. code-block:: ini
[windows]
WindowsServer ansible_host=127.0.0.1
[windows:vars]
ansible_user=vagrant
ansible_password=vagrant
ansible_port=55986
ansible_connection=winrm
ansible_winrm_transport=ntlm
ansible_winrm_server_cert_validation=ignore
.. note:: The port ``55986`` is automatically forwarded by Vagrant to the
   Windows host that was created. If this conflicts with an existing local
   port, Vagrant will automatically use another one at random and display
   it in the output.
The OS that is created is based on the image set. The following
images can be used:
- `jborean93/WindowsServer2012 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012>`_
- `jborean93/WindowsServer2012R2 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2012R2>`_
- `jborean93/WindowsServer2016 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2016>`_
- `jborean93/WindowsServer2019 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2019>`_
- `jborean93/WindowsServer2022 <https://app.vagrantup.com/jborean93/boxes/WindowsServer2022>`_
When the host is online, it can be accessed over RDP on ``127.0.0.1:3389``, but the
port may differ depending on whether there was a conflict. To get rid of the host, run
``vagrant destroy --force`` and Vagrant will automatically remove the VM and
any other files associated with that VM.
While this is useful when testing modules on a single Windows instance, these
hosts won't work with domain-based modules without modification. The Vagrantfile
at `ansible-windows <https://github.com/jborean93/ansible-windows/tree/master/vagrant>`_
can be used to create a test domain environment to be used in Ansible. This
repo contains three files which are used by both Ansible and Vagrant to create
multiple Windows hosts in a domain environment. These files are:
- ``Vagrantfile``: The Vagrant file that reads the inventory setup of ``inventory.yml`` and provisions the hosts that are required
- ``inventory.yml``: Contains the hosts that are required and other connection information such as IP addresses and forwarded ports
- ``main.yml``: Ansible playbook called by Vagrant to provision the domain controller and join the child hosts to the domain
By default, these files will create the following environment:
- A single domain controller running on Windows Server 2016
- Five child hosts for each major Windows Server version joined to that domain
- A domain with the DNS name ``domain.local``
- A local administrator account on each host with the username ``vagrant`` and password ``vagrant``
- A domain admin account ``[email protected]`` with the password ``VagrantPass1``
The domain name and accounts can be modified by changing the variables
``domain_*`` in the ``inventory.yml`` file if it is required. The inventory
file can also be modified to provision more or fewer servers by changing the
hosts that are defined under the ``domain_children`` key. The host variable
``ansible_host`` is the private IP that will be assigned to the VirtualBox host
only network adapter while ``vagrant_box`` is the box that will be used to
create the VM.
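As an illustration only (see the repository for the actual file), an entry under the ``domain_children`` key in ``inventory.yml`` might look similar to:

.. code-block:: yaml

   domain_children:
     SERVER2019:
       ansible_host: 192.168.56.11
       vagrant_box: jborean93/WindowsServer2019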
Provisioning the environment
============================
To provision the environment as is, run the following:
.. code-block:: shell
git clone https://github.com/jborean93/ansible-windows.git
cd ansible-windows/vagrant
vagrant up
.. note:: Vagrant provisions each host sequentially so this can take some time
to complete. If any errors occur during the Ansible phase of setting up the
domain, run ``vagrant provision`` to rerun just that step.
Unlike setting up a single Windows instance with Vagrant, these hosts can also
be accessed using the IP address directly as well as through the forwarded
ports. It is easier to access them over the host-only network adapter as the
normal protocol ports are used, for example RDP is still over ``3389``. In cases where
the host cannot be resolved using the host-only network IP, the following
protocols can be accessed over ``127.0.0.1`` using these forwarded ports:
- ``RDP``: 295xx
- ``SSH``: 296xx
- ``WinRM HTTP``: 297xx
- ``WinRM HTTPS``: 298xx
- ``SMB``: 299xx
Replace ``xx`` with the entry number in the inventory file, where the domain
controller starts at ``00`` and the number is incremented from there. For example, in
the default ``inventory.yml`` file, WinRM over HTTPS for ``SERVER2012R2`` is
forwarded over port ``29804`` as it's the fourth entry in ``domain_children``.
Windows new module development
==============================
When creating a new module there are a few things to keep in mind:
- Module code is in Powershell (.ps1) files while the documentation is contained in Python (.py) files of the same name
- Avoid using ``Write-Host/Debug/Verbose/Error`` in the module and add what needs to be returned to the ``$module.Result`` variable
- To fail a module, call ``$module.FailJson("failure message here")``; an Exception or ErrorRecord can be passed as the second argument, for example ``FailJson("failure", $_)``, for a more descriptive error message
- Most new modules require check mode and integration tests before they are merged into the main Ansible codebase
- Avoid using try/catch statements over a large code block, rather use them for individual calls so the error message can be more descriptive
- Try and catch specific exceptions when using try/catch statements
- Avoid using PSCustomObjects unless necessary
- Look for common functions in ``./lib/ansible/module_utils/powershell/`` and use the code there instead of duplicating work. These can be imported by adding the line ``#Requires -Module *`` where * is the filename to import, and will be automatically included with the module code sent to the Windows target when run through Ansible
- As well as PowerShell module utils, C# module utils are stored in ``./lib/ansible/module_utils/csharp/`` and are automatically imported in a module execution if the line ``#AnsibleRequires -CSharpUtil *`` is present
- C# and PowerShell module utils achieve the same goal but C# allows a developer to implement low level tasks, such as calling the Win32 API, and can be faster in some cases
- Ensure the code runs under Powershell v3 and higher on Windows Server 2012 and higher; if higher minimum Powershell or OS versions are required, ensure the documentation reflects this clearly
- Ansible runs modules under strictmode version 2.0. Be sure to test with that enabled by putting ``Set-StrictMode -Version 2.0`` at the top of your dev script
- Favor native Powershell cmdlets over executable calls if possible
- Use the full cmdlet name instead of aliases, for example ``Remove-Item`` over ``rm``
- Use named parameters with cmdlets, for example ``Remove-Item -Path C:\temp`` over ``Remove-Item C:\temp`` (a short sketch combining several of these conventions follows this list)
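Here is such a sketch; the module's ``path`` option and overall behavior are illustrative, not a real module:

.. code-block:: powershell

   #!powershell

   #AnsibleRequires -CSharpUtil Ansible.Basic

   $spec = @{
       options = @{
           path = @{ type = 'path'; required = $true }
       }
       supports_check_mode = $true
   }
   $module = [Ansible.Basic.AnsibleModule]::Create($args, $spec)

   # Favor native cmdlets, full cmdlet names, and named parameters;
   # record the outcome in $module.Result instead of writing to the host
   if (Test-Path -LiteralPath $module.Params.path) {
       if (-not $module.CheckMode) {
           Remove-Item -LiteralPath $module.Params.path -Recurse
       }
       $module.Result.changed = $true
   }

   $module.ExitJson()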
A very basic Powershell module `win_environment <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_environment.ps1>`_ incorporates best practices for Powershell modules. It demonstrates how to implement check-mode and diff-support, and also shows a warning to the user when a specific condition is met.
A slightly more advanced module is `win_uri <https://github.com/ansible-collections/ansible.windows/blob/main/plugins/modules/win_uri.ps1>`_ which additionally shows how to use different parameter types (bool, str, int, list, dict, path) and a selection of choices for parameters, how to fail a module and how to handle exceptions.
As part of the new ``AnsibleModule`` wrapper, the input parameters are defined and validated based on an argument
spec. The following options can be set at the root level of the argument spec:
- ``mutually_exclusive``: A list of lists, where the inner list contains module options that cannot be set together
- ``no_log``: Stops the module from emitting any logs to the Windows Event log
- ``options``: A dictionary where the key is the module option and the value is the spec for that option
- ``required_by``: A dictionary where the option(s) specified by the value must be set if the option specified by the key is also set
- ``required_if``: A list of lists where the inner list contains 3 or 4 elements;
* The first element is the module option to check the value against
* The second element is the value of the option specified by the first element, if matched then the required if check is run
* The third element is a list of required module options when the above is matched
* An optional fourth element is a boolean that states whether all module options in the third elements are required (default: ``$false``) or only one (``$true``)
- ``required_one_of``: A list of lists, where the inner list contains module options where at least one must be set
- ``required_together``: A list of lists, where the inner list contains module options that must be set together
- ``supports_check_mode``: Whether the module supports check mode, by default this is ``$false``
The actual input options for a module are set within the ``options`` value as a dictionary. The keys of this dictionary
are the module option names while the values are the spec of that module option. Each spec can have the following
options set (a combined example follows these lists):
- ``aliases``: A list of aliases for the module option
- ``choices``: A list of valid values for the module option, if ``type=list`` then each list value is validated against the choices and not the list itself
- ``default``: The default value for the module option if not set
- ``deprecated_aliases``: A list of hashtables that define aliases that are deprecated and the versions they will be removed in. Each entry must contain the keys ``name`` and ``collection_name`` with either ``version`` or ``date``
- ``elements``: When ``type=list``, this sets the type of each list value, the values are the same as ``type``
- ``no_log``: Will sanitise the input value before being returned in the ``module_invocation`` return value
- ``removed_in_version``: States when a deprecated module option is to be removed, a warning is displayed to the end user if set
- ``removed_at_date``: States the date (YYYY-MM-DD) when a deprecated module option will be removed, a warning is displayed to the end user if set
- ``removed_from_collection``: States from which collection the deprecated module option will be removed; must be specified if one of ``removed_in_version`` and ``removed_at_date`` is specified
- ``required``: Will fail when the module option is not set
- ``type``: The type of the module option, if not set then it defaults to ``str``. The valid types are;
* ``bool``: A boolean value
* ``dict``: A dictionary value, if the input is a JSON or key=value string then it is converted to dictionary
* ``float``: A float or `Single <https://docs.microsoft.com/en-us/dotnet/api/system.single?view=netframework-4.7.2>`_ value
* ``int``: An Int32 value
* ``json``: A string where the value is converted to a JSON string if the input is a dictionary
* ``list``: A list of values, ``elements=<type>`` can convert the individual list value types if set. If ``elements=dict`` then ``options`` is defined, the values will be validated against the argument spec. When the input is a string then the string is split by ``,`` and any whitespace is trimmed
* ``path``: A string where values like ``%TEMP%`` are expanded based on environment values. If the input value starts with ``\\?\`` then no expansion is run
* ``raw``: No conversions occur on the value passed in by Ansible
* ``sid``: Will convert Windows security identifier values or Windows account names to a `SecurityIdentifier <https://docs.microsoft.com/en-us/dotnet/api/system.security.principal.securityidentifier?view=netframework-4.7.2>`_ value
* ``str``: The value is converted to a string
When ``type=dict``, or ``type=list`` and ``elements=dict``, the following keys can also be set for that module option:
- ``apply_defaults``: The value is based on the ``options`` spec defaults for that key if ``True`` and null if ``False``. Only valid when the module option is not defined by the user and ``type=dict``.
- ``mutually_exclusive``: Same as the root level ``mutually_exclusive`` but validated against the values in the sub dict
- ``options``: Same as the root level ``options`` but contains the valid options for the sub option
- ``required_if``: Same as the root level ``required_if`` but validated against the values in the sub dict
- ``required_by``: Same as the root level ``required_by`` but validated against the values in the sub dict
- ``required_together``: Same as the root level ``required_together`` but validated against the values in the sub dict
- ``required_one_of``: Same as the root level ``required_one_of`` but validated against the values in the sub dict
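As an illustration only, a hypothetical argument spec combining several of these settings might look like:

.. code-block:: powershell

   $spec = @{
       options = @{
           path = @{ type = 'path'; required = $true }
           state = @{ type = 'str'; default = 'present'; choices = 'absent', 'present' }
           timeout = @{ type = 'int'; default = 30 }
           credentials = @{
               type = 'dict'
               options = @{
                   username = @{ type = 'str' }
                   password = @{ type = 'str'; no_log = $true }
               }
               required_together = @(
                   ,@('username', 'password')
               )
           }
       }
       supports_check_mode = $true
   }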
A module type can also be a delegate function that converts the value to whatever is required by the module option. For
example the following snippet shows how to create a custom type that creates a ``UInt64`` value:
.. code-block:: powershell
$spec = @{
uint64_type = @{ type = [Func[[Object], [UInt64]]]{ [System.UInt64]::Parse($args[0]) } }
}
$uint64_type = $module.Params.uint64_type
When in doubt, look at some of the other core modules and see how things have been
implemented there.
Sometimes there are multiple ways that Windows offers to complete a task; this
is the order to favor when writing modules:
- Native Powershell cmdlets like ``Remove-Item -Path C:\temp -Recurse``
- .NET classes like ``[System.IO.Path]::GetRandomFileName()``
- WMI objects through the ``New-CimInstance`` cmdlet
- COM objects through ``New-Object -ComObject`` cmdlet
- Calls to native executables like ``Secedit.exe``
PowerShell modules support a small subset of the ``#Requires`` options built
into PowerShell as well as some Ansible-specific requirements specified by
``#AnsibleRequires``. These statements can be placed at any point in the script,
but are most commonly near the top. They are used to make it easier to state the
requirements of the module without writing any of the checks. Each ``requires``
statement must be on its own line, but there can be multiple requires statements
in one script.
These are the checks that can be used within Ansible modules:
- ``#Requires -Module Ansible.ModuleUtils.<module_util>``: Added in Ansible 2.4, specifies a module_util to load in for the module execution.
- ``#Requires -Version x.y``: Added in Ansible 2.5, specifies the version of PowerShell that is required by the module. The module will fail if this requirement is not met.
- ``#AnsibleRequires -PowerShell <module_util>``: Added in Ansible 2.8, like ``#Requires -Module``, this specifies a module_util to load in for module execution.
- ``#AnsibleRequires -CSharpUtil <module_util>``: Added in Ansible 2.8, specifies a C# module_util to load in for the module execution.
- ``#AnsibleRequires -OSVersion x.y``: Added in Ansible 2.5, specifies the OS build version that is required by the module and will fail if this requirement is not met. The actual OS version is derived from ``[Environment]::OSVersion.Version``.
- ``#AnsibleRequires -Become``: Added in Ansible 2.5, forces the exec runner to run the module with ``become``, which is primarily used to bypass WinRM restrictions. If ``ansible_become_user`` is not specified then the ``SYSTEM`` account is used instead.
The ``#AnsibleRequires -PowerShell`` and ``#AnsibleRequires -CSharpUtil``
support further features such as:
- Importing a util contained in a collection (added in Ansible 2.9)
- Importing a util by relative names (added in Ansible 2.10)
- Specifying the util is optional by adding ``-Optional`` to the import
declaration (added in Ansible 2.12).
See the below examples for more details:
.. code-block:: powershell
# Imports the PowerShell Ansible.ModuleUtils.Legacy provided by Ansible itself
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Legacy
# Imports the PowerShell my_util in the my_namespace.my_name collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the PowerShell my_util that exists in the same collection as the current module
#AnsibleRequires -PowerShell ..module_utils.my_util
# Imports the PowerShell Ansible.ModuleUtils.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -PowerShell Ansible.ModuleUtils.Optional -Optional
# Imports the C# Ansible.Process provided by Ansible itself
#AnsibleRequires -CSharpUtil Ansible.Process
# Imports the C# my_util in the my_namespace.my_name collection
#AnsibleRequires -CSharpUtil ansible_collections.my_namespace.my_name.plugins.module_utils.my_util
# Imports the C# my_util that exists in the same collection as the current module
#AnsibleRequires -CSharpUtil ..module_utils.my_util
# Imports the C# Ansible.Optional provided by Ansible if it exists.
# If it does not exist then it will do nothing.
#AnsibleRequires -CSharpUtil Ansible.Optional -Optional
For optional require statements, it is up to the module code to then verify
whether the util has been imported before trying to use it. This can be done by
checking if a function or type provided by the util exists or not.
While both ``#Requires -Module`` and ``#AnsibleRequires -PowerShell`` can be
used to load a PowerShell module it is recommended to use ``#AnsibleRequires``.
This is because ``#AnsibleRequires`` supports collection module utils, imports
by relative util names, and optional util imports.
C# module utils can reference other C# utils by adding the line
``using Ansible.<module_util>;`` to the top of the script with all the other
using statements.
Windows module utilities
========================
Like Python modules, PowerShell modules also provide a number of module
utilities that provide helper functions within PowerShell. These module_utils
can be imported by adding the following line to a PowerShell module:
.. code-block:: powershell
#Requires -Module Ansible.ModuleUtils.Legacy
This will import the module_util at ``./lib/ansible/module_utils/powershell/Ansible.ModuleUtils.Legacy.psm1``
and enable calling all of its functions. As of Ansible 2.8, Windows module
utils can also be written in C# and stored at ``lib/ansible/module_utils/csharp``.
These module_utils can be imported by adding the following line to a PowerShell
module:
.. code-block:: powershell
#AnsibleRequires -CSharpUtil Ansible.Basic
This will import the module_util at ``./lib/ansible/module_utils/csharp/Ansible.Basic.cs``
and automatically load the types in the executing process. C# module utils can
reference each other and be loaded together by adding the following line to the
using statements at the top of the util:
.. code-block:: csharp
using Ansible.Become;
There are special comments that can be set in a C# file for controlling the
compilation parameters. The following comments can be added to the script;
- ``//AssemblyReference -Name <assembly dll> [-CLR [Core|Framework]]``: The assembly DLL to reference during compilation, the optional ``-CLR`` flag can also be used to state whether to reference when running under .NET Core, Framework, or both (if omitted)
- ``//NoWarn -Name <error id> [-CLR [Core|Framework]]``: A compiler warning ID to ignore when compiling the code, the optional ``-CLR`` works the same as above. A list of warnings can be found at `Compiler errors <https://docs.microsoft.com/en-us/dotnet/csharp/language-reference/compiler-messages/index>`_
As well as this, the following pre-processor symbols are defined;
- ``CORECLR``: This symbol is present when PowerShell is running through .NET Core
- ``WINDOWS``: This symbol is present when PowerShell is running on Windows
- ``UNIX``: This symbol is present when PowerShell is running on Unix
A combination of these flags helps to make a module util interoperable on both
.NET Framework and .NET Core. Here is an example of them in action:
.. code-block:: csharp
#if CORECLR
using Newtonsoft.Json;
#else
using System.Web.Script.Serialization;
#endif
//AssemblyReference -Name Newtonsoft.Json.dll -CLR Core
//AssemblyReference -Name System.Web.Extensions.dll -CLR Framework
// Ignore error CS1702 for all .NET types
//NoWarn -Name CS1702
// Ignore error CS1956 only for .NET Framework
//NoWarn -Name CS1956 -CLR Framework
The following is a list of module_utils that are packaged with Ansible and a general description of what
they do:
- ArgvParser: Utility used to convert a list of arguments to an escaped string compliant with the Windows argument parsing rules.
- CamelConversion: Utility used to convert camelCase strings/lists/dicts to snake_case.
- CommandUtil: Utility used to execute a Windows process and return the stdout/stderr and rc as separate objects.
- FileUtil: Utility that expands on the ``Get-ChildItem`` and ``Test-Path`` to work with special files like ``C:\pagefile.sys``.
- Legacy: General definitions and helper utilities for Ansible modules.
- LinkUtil: Utility to create, remove, and get information about symbolic links, junction points and hard links.
- SID: Utilities used to convert a user or group to a Windows SID and vice versa.
For more details on any specific module utility and their requirements, please see the `Ansible
module utilities source code <https://github.com/ansible/ansible/tree/devel/lib/ansible/module_utils/powershell>`_.
PowerShell module utilities can be stored outside of the standard Ansible
distribution for use with custom modules. Custom module_utils are placed in a
folder called ``module_utils`` located in the root folder of the playbook or role
directory.
C# module utilities can also be stored outside of the standard Ansible distribution for use with custom modules. Like
PowerShell utils, these are stored in a folder called ``module_utils`` and the filename must end in the extension
``.cs``, start with ``Ansible.`` and be named after the namespace defined in the util.
The below example is a role structure that contains two PowerShell custom module_utils called
``Ansible.ModuleUtils.ModuleUtil1``, ``Ansible.ModuleUtils.ModuleUtil2``, and a C# util containing the namespace
``Ansible.CustomUtil``:
.. code-block:: console
meta/
main.yml
defaults/
main.yml
module_utils/
Ansible.ModuleUtils.ModuleUtil1.psm1
Ansible.ModuleUtils.ModuleUtil2.psm1
Ansible.CustomUtil.cs
tasks/
main.yml
Each PowerShell module_util must contain at least one function that has been exported with ``Export-ModuleMember``
at the end of the file. For example:
.. code-block:: powershell
Export-ModuleMember -Function Invoke-CustomUtil, Get-CustomInfo
Exposing shared module options
++++++++++++++++++++++++++++++
PowerShell module utils can easily expose common module options that a module can use when building its argument spec.
This allows common features to be stored and maintained in one location and have those features used by multiple
modules with minimal effort. Any new features or bugfixes added to one of these utils are then automatically used by
the various modules that call that util.
An example of this would be to have a module util that handles authentication and communication against an API. This
util can be used by multiple modules to expose a common set of module options like the API endpoint, username,
password, timeout, cert validation, and so on without having to add those options to each module spec.
The standard convention for a module util that has a shared argument spec would include:
- A ``Get-<namespace.name.util name>Spec`` function that outputs the common spec for a module
* It is highly recommended to make this function name unique to the module to avoid any conflicts with other utils that can be loaded
* The format of the output spec is a Hashtable in the same format as the ``$spec`` used for normal modules
- A function that takes in an ``AnsibleModule`` object called under the ``-Module`` parameter which it can use to get the shared options
Because these options can be shared across various modules, it is highly recommended to keep the module option names and
aliases in the shared spec as specific as they can be. For example, do not have a util option called ``password``;
rather, you should prefix it with a unique name like ``acme_password``.
.. warning::
Failure to have a unique option name or alias can prevent the util being used by modules that also use those names or
aliases for their own options.
The following is an example module util called ``ServiceAuth.psm1`` in a collection that implements a common way for
modules to authenticate with a service.
.. code-block:: powershell
Function Invoke-MyServiceResource {
[CmdletBinding()]
param (
[Parameter(Mandatory=$true)]
[ValidateScript({ $_.GetType().FullName -eq 'Ansible.Basic.AnsibleModule' })]
$Module,
[Parameter(Mandatory=$true)]
[String]
$ResourceId,
[String]
$State = 'present'
)
# Process the common module options known to the util
$params = @{
ServerUri = $Module.Params.my_service_url
}
if ($Module.Params.my_service_username) {
$params.Credential = Get-MyServiceCredential
}
if ($State -eq 'absent') {
Remove-MyService @params -ResourceId $ResourceId
} else {
New-MyService @params -ResourceId $ResourceId
}
}
Function Get-MyNamespaceMyCollectionServiceAuthSpec {
# Output the util spec
@{
options = @{
my_service_url = @{ type = 'str'; required = $true }
my_service_username = @{ type = 'str' }
my_service_password = @{ type = 'str'; no_log = $true }
}
required_together = @(
,@('my_service_username', 'my_service_password')
)
}
}
$exportMembers = @{
Function = 'Get-MyNamespaceMyCollectionServiceAuthSpec', 'Invoke-MyServiceResource'
}
Export-ModuleMember @exportMembers
For a module to take advantage of this common argument spec, it can be set out like this:
.. code-block:: powershell
#!powershell
# Include the module util ServiceAuth.psm1 from the my_namespace.my_collection collection
#AnsibleRequires -PowerShell ansible_collections.my_namespace.my_collection.plugins.module_utils.ServiceAuth
# Create the module spec like normal
$spec = @{
options = @{
resource_id = @{ type = 'str'; required = $true }
state = @{ type = 'str'; choices = 'absent', 'present' }
}
}
# Create the module from the module spec but also include the util spec to merge into our own.
$module = [Ansible.Basic.AnsibleModule]::Create($args, $spec, @(Get-MyNamespaceMyCollectionServiceAuthSpec))
# Call the ServiceAuth module util and pass in the module object so it can access the module options.
Invoke-MyServiceResource -Module $module -ResourceId $module.Params.resource_id -State $module.Params.state
$module.ExitJson()
.. note::
Options defined in the module spec will always have precedence over a util spec. Any list values under the same key
in a util spec will be appended to the module spec for that same key. Dictionary values will add any keys that are
missing from the module spec and merge any values that are lists or dictionaries. This is similar to how the doc
fragment plugins work when extending module documentation.
To document these shared util options for a module, create a doc fragment plugin that documents the options implemented
by the module util and extend the module docs for every module that implements the util to include that fragment in
its docs.
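A minimal sketch of such a doc fragment, reusing the illustrative option names from the ``ServiceAuth.psm1`` example above, could look like:

.. code-block:: python

   # plugins/doc_fragments/service_auth.py in the my_namespace.my_collection collection
   class ModuleDocFragment(object):

       DOCUMENTATION = r'''
   options:
     my_service_url:
       description: URL of the service endpoint.
       type: str
       required: true
     my_service_username:
       description: Username used to authenticate with the service.
       type: str
     my_service_password:
       description: Password used to authenticate with the service.
       type: str
   '''

Modules that implement the util would then list ``my_namespace.my_collection.service_auth`` under ``extends_documentation_fragment`` in their documentation.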
Windows playbook module testing
===============================
You can test a module with an Ansible playbook. For example:
- Create a playbook in any directory: ``touch testmodule.yml``.
- Create an inventory file in the same directory: ``touch hosts``.
- Populate the inventory file with the variables required to connect to the Windows host(s).
- Add the following to the new playbook file:
.. code-block:: yaml
---
- name: test out windows module
hosts: windows
tasks:
- name: test out module
win_module:
name: test name
- Run the playbook ``ansible-playbook -i hosts testmodule.yml``
This can be useful for seeing how Ansible runs with
the new module end to end. Other possible ways to test the module are
shown below.
Windows debugging
=================
Debugging a module currently can only be done on a Windows host. This can be
useful when developing a new module or implementing bug fixes. These
are some steps that need to be followed to set this up:
- Copy the module script to the Windows server
- Copy the folders ``./lib/ansible/module_utils/powershell`` and ``./lib/ansible/module_utils/csharp`` to the same directory as the script above
- Add an extra ``#`` to the start of any ``#Requires -Module`` lines in the module code to comment them out; this is only needed for ``#Requires -Module`` lines, as ``#AnsibleRequires`` lines are already plain comments
- Add the following to the start of the module script that was copied to the server:
.. code-block:: powershell
# Set $ErrorActionPreference to what's set during Ansible execution
$ErrorActionPreference = "Stop"
# Set the first argument as the path to a JSON file that contains the module args
$args = @("$($pwd.Path)\args.json")
# Or instead of an args file, set $complex_args to the pre-processed module args
$complex_args = @{
_ansible_check_mode = $false
_ansible_diff = $false
path = "C:\temp"
state = "present"
}
# Import any C# utils referenced with '#AnsibleRequires -CSharpUtil' or 'using Ansible.<namespace>;'
# The $_csharp_utils entries should be the contents of the C# util files and not the paths
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.AddType.psm1"
$_csharp_utils = @(
[System.IO.File]::ReadAllText("$($pwd.Path)\csharp\Ansible.Basic.cs")
)
Add-CSharpType -References $_csharp_utils -IncludeDebugInfo
# Import any PowerShell modules referenced with '#Requires -Module'
Import-Module -Name "$($pwd.Path)\powershell\Ansible.ModuleUtils.Legacy.psm1"
# End of the setup code and start of the module code
#!powershell
You can add more args to ``$complex_args`` as required by the module or define the module options through a JSON file
with the structure:
.. code-block:: json
{
"ANSIBLE_MODULE_ARGS": {
"_ansible_check_mode": false,
"_ansible_diff": false,
"path": "C:\\temp",
"state": "present"
}
}
There are multiple IDEs that can be used to debug a PowerShell script; two of
the most popular ones are
- `PowerShell ISE`_
- `Visual Studio Code`_
.. _PowerShell ISE: https://docs.microsoft.com/en-us/powershell/scripting/core-powershell/ise/how-to-debug-scripts-in-windows-powershell-ise
.. _Visual Studio Code: https://blogs.technet.microsoft.com/heyscriptingguy/2017/02/06/debugging-powershell-script-in-visual-studio-code-part-1/
To view the arguments as passed by Ansible to the module, follow
these steps:
- Prefix the Ansible command with :envvar:`ANSIBLE_KEEP_REMOTE_FILES=1<ANSIBLE_KEEP_REMOTE_FILES>` to specify that Ansible should keep the exec files on the server.
- Log onto the Windows server using the same user account that Ansible used to execute the module.
- Navigate to ``%TEMP%\..``. It should contain a folder starting with ``ansible-tmp-``.
- Inside this folder, open the PowerShell script for the module.
- In this script is a raw JSON blob under ``$json_raw`` which contains the module arguments under ``module_args``. These args can be assigned manually to the ``$complex_args`` variable defined in your debug script or put in the ``args.json`` file.
Windows unit testing
====================
Currently there is no mechanism to run unit tests for PowerShell modules under Ansible CI.
Windows integration testing
===========================
Integration tests for Ansible modules are typically written as Ansible roles. These test
roles are located in ``./test/integration/targets``. You must first set up your testing
environment and configure a test inventory for Ansible to connect to.
In this example we will set up a test inventory to connect to two hosts and run the integration
tests for win_stat:
- Run the command ``source ./hacking/env-setup`` to prepare the environment.
- Create a copy of ``./test/integration/inventory.winrm.template`` and name it ``inventory.winrm``.
- Fill in entries under ``[windows]`` and set the variables required to connect to the host.
- :ref:`Install the required Python modules <windows_winrm>` to support WinRM and a configured authentication method.
- To execute the integration tests, run ``ansible-test windows-integration win_stat``; you can replace ``win_stat`` with the role you want to test.
This will execute all the tests currently defined for that role. You can set
the verbosity level using the ``-v`` argument just as you would with
ansible-playbook.
When developing tests for a new module, it is recommended to test a scenario once in
check mode and twice not in check mode. This ensures that check mode
does not make any changes but reports a change, as well as that the second run is
idempotent and does not report changes. For example:
.. code-block:: yaml
- name: remove a file (check mode)
win_file:
path: C:\temp
state: absent
register: remove_file_check
  check_mode: true
- name: get result of remove a file (check mode)
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual_check
- name: assert remove a file (check mode)
assert:
that:
- remove_file_check is changed
- remove_file_actual_check.stdout == 'true\r\n'
- name: remove a file
win_file:
path: C:\temp
state: absent
register: remove_file
- name: get result of remove a file
win_command: powershell.exe "if (Test-Path -Path 'C:\temp') { 'true' } else { 'false' }"
register: remove_file_actual
- name: assert remove a file
assert:
that:
- remove_file is changed
- remove_file_actual.stdout == 'false\r\n'
- name: remove a file (idempotent)
win_file:
path: C:\temp
state: absent
register: remove_file_again
- name: assert remove a file (idempotent)
assert:
that:
- not remove_file_again is changed
Windows communication and development support
=============================================
Join the ``#ansible-devel`` or ``#ansible-windows`` chat channels (using Matrix at ansible.im or using IRC at `irc.libera.chat <https://libera.chat/>`_) for discussions about Ansible development for Windows.
For questions and discussions pertaining to using the Ansible product,
use the ``#ansible`` channel.
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,925 |
Docs: various files - replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a follow-on comment.
Changes are: change `yes` to `true` and `no` to `false`. The replacement values
must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78925
|
https://github.com/ansible/ansible/pull/78980
|
6d0aeac1e166842f2833f4fb64c727cc7f818118
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
| 2022-09-29T14:38:19Z |
python
| 2022-10-04T14:29:26Z |
docs/docsite/rst/plugins/inventory.rst
|
.. _inventory_plugins:
Inventory plugins
=================
.. contents::
:local:
:depth: 2
Inventory plugins allow users to point at data sources to compile the inventory of hosts that Ansible uses to target tasks, either using the ``-i /path/to/file`` and/or ``-i 'host1, host2'`` command line parameters or from other configuration sources. If necessary, you can :ref:`create custom inventory plugins <developing_inventory_plugins>`.
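As a quick sketch of what a custom plugin involves (the plugin name and the hosts it adds below are illustrative only, not a real plugin):

.. code-block:: python

    # plugins/inventory/my_hosts.py
    from ansible.plugins.inventory import BaseInventoryPlugin


    class InventoryModule(BaseInventoryPlugin):

        NAME = 'my_namespace.my_collection.my_hosts'  # matches the 'plugin' entry in the YAML source

        def verify_file(self, path):
            # only accept YAML sources that follow this plugin's naming scheme
            return super(InventoryModule, self).verify_file(path) and path.endswith(('my_hosts.yml', 'my_hosts.yaml'))

        def parse(self, inventory, loader, path, cache=True):
            super(InventoryModule, self).parse(inventory, loader, path, cache)
            self._read_config_data(path)  # loads and validates the YAML options
            self.inventory.add_group('webservers')
            self.inventory.add_host('web1.example.com', group='webservers')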
.. _enabling_inventory:
Enabling inventory plugins
--------------------------
Most inventory plugins shipped with Ansible are enabled by default or can be used with the ``auto`` plugin.
In some circumstances, for example, if the inventory plugin does not use a YAML configuration file, you may need to enable the specific plugin. You can do this by setting ``enable_plugins`` in your :ref:`ansible.cfg <ansible_configuration_settings>` file in the ``[inventory]`` section. Modifying this will override the default list of enabled plugins. Here is the default list of enabled plugins that ships with Ansible:
.. code-block:: ini
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml
If the plugin is in a collection and is not being picked up by the `auto` statement, you can append the fully qualified name:
.. code-block:: ini
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml, namespace.collection_name.inventory_plugin_name
Or, if it is a local plugin, perhaps stored in the path set by :ref:`DEFAULT_INVENTORY_PLUGIN_PATH`, you could reference it as follows:
.. code-block:: ini
[inventory]
enable_plugins = host_list, script, auto, yaml, ini, toml, my_plugin
If you use a plugin that supports a YAML configuration source, make sure that the name matches the name provided in the ``plugin`` entry of the inventory source file.
.. _using_inventory:
Using inventory plugins
-----------------------
To use an inventory plugin, you must provide an inventory source. Most of the time this is a file containing host information or a YAML configuration file with options for the plugin. You can use the ``-i`` flag to provide inventory sources or configure a default inventory path.
.. code-block:: bash
ansible hostname -i inventory_source -m ansible.builtin.ping
To start using an inventory plugin with a YAML configuration source, create a file with the accepted filename schema documented for the plugin in question, then add ``plugin: plugin_name``. Use the fully qualified name if the plugin is in a collection.
.. code-block:: yaml
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
Each plugin should document any naming restrictions. In addition, the YAML config file must end with the extension ``yml`` or ``yaml`` to be enabled by default with the ``auto`` plugin (otherwise, see the section above on enabling plugins).
After providing any required options, you can view the populated inventory with ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@ungrouped:
If you are using an inventory plugin in a playbook-adjacent collection and want to test your setup with ``ansible-inventory``, use the ``--playbook-dir`` flag.
Your inventory source might be a directory of inventory configuration files. The constructed inventory plugin only operates on those hosts already in inventory, so you may want the constructed inventory configuration parsed at a particular point (such as last). Ansible parses the directory recursively, alphabetically. You cannot configure the parsing approach, so name your files to make it work predictably. Inventory plugins that extend constructed features directly can work around that restriction by adding constructed options in addition to the inventory plugin options. Otherwise, you can use ``-i`` with multiple sources to impose a specific order, for example ``-i demo.aws_ec2.yml -i clouds.yml -i constructed.yml``.
You can create dynamic groups using host variables with the constructed ``keyed_groups`` option. The option ``groups`` can also be used to create groups and ``compose`` creates and modifies host variables. Here is an aws_ec2 example utilizing constructed features:
.. code-block:: yaml
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
regions:
- us-east-1
- us-east-2
keyed_groups:
# add hosts to tag_Name_value groups for each aws_ec2 host's tags.Name variable
- key: tags.Name
prefix: tag_Name_
separator: ""
# If you have a tag called "Role" which has the value "Webserver", this will add the group
# role_Webserver and add any hosts that have that tag assigned to it.
- key: tags.Role
prefix: role
groups:
# add hosts to the group development if any of the dictionary's keys or values is the word 'devel'
development: "'devel' in (tags|list)"
# add hosts to the "private_only" group if the host doesn't have a public IP associated to it
private_only: "public_ip_address is not defined"
compose:
# use a private address where a public one isn't assigned
ansible_host: public_ip_address|default(private_ip_address)
# alternatively, set the ansible_host variable to connect with the private IP address without changing the hostname
# ansible_host: private_ip_address
# if you *must* set a string here (perhaps to identify the inventory source if you have multiple
# accounts you want to use as sources), you need to wrap this in two sets of quotes, either ' then "
# or " then '
some_inventory_wide_string: '"Yes, you need both types of quotes here"'
Now the output of ``ansible-inventory -i demo.aws_ec2.yml --graph``:
.. code-block:: text
@all:
|--@aws_ec2:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
| |--...
|--@development:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@role_Webserver
| |--ec2-12-345-678-901.compute-1.amazonaws.com
|--@tag_Name_ECS_Instance:
| |--ec2-98-765-432-10.compute-1.amazonaws.com
|--@tag_Name_Test_Server:
| |--ec2-12-345-678-901.compute-1.amazonaws.com
|--@ungrouped
If a host does not have the variables in the configuration above (in other words, ``tags.Name``, ``tags``, ``private_ip_address``), the host will not be added to groups other than those that the inventory plugin creates and the ``ansible_host`` host variable will not be modified.
Inventory plugins that support caching can use the general settings for the fact cache defined in the ``ansible.cfg`` file's ``[defaults]`` section or define inventory-specific settings in the ``[inventory]`` section. Individual plugins can define plugin-specific cache settings in their config file:
.. code-block:: yaml
# demo.aws_ec2.yml
plugin: amazon.aws.aws_ec2
cache: true
cache_plugin: ansible.builtin.jsonfile
cache_timeout: 7200
cache_connection: /tmp/aws_inventory
cache_prefix: aws_ec2
Here is an example of setting inventory caching with some fact caching defaults for the cache plugin used and the timeout in an ``ansible.cfg`` file:
.. code-block:: ini
[defaults]
fact_caching = ansible.builtin.jsonfile
fact_caching_connection = /tmp/ansible_facts
cache_timeout = 3600
[inventory]
cache = true
cache_connection = /tmp/ansible_inventory
.. _inventory_plugin_list:
Plugin list
-----------
You can use ``ansible-doc -t inventory -l`` to see the list of available plugins.
Use ``ansible-doc -t inventory <plugin name>`` to see plugin-specific documentation and examples.
.. seealso::
:ref:`about_playbooks`
An introduction to playbooks
:ref:`callback_plugins`
Callback plugins
:ref:`connection_plugins`
Connection plugins
:ref:`filter_plugins`
Filter plugins
:ref:`test_plugins`
Test plugins
:ref:`lookup_plugins`
Lookup plugins
:ref:`vars_plugins`
Vars plugins
`User Mailing List <https://groups.google.com/group/ansible-devel>`_
Have a question? Stop by the google group!
:ref:`communication_irc`
How to join Ansible chat channels
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,925 |
Docs: various files - replace boolean yes/no with true/false
|
### Summary
Based on the [steering committee vote to use true/false for booleans,](https://github.com/ansible-community/community-topics/discussions/120) we are going through the guides to adapt to this change.
This issue requests these changes to the files listed in a follow-on comment.
Changes are: change `yes` to `true` and `no` to `false`. The replacement values
must be lowercase. Please open one PR to handle these changes. It should impact 6 files. NOTE - ansibot does not like PRs over 50 files.
The following grep was used to create the list.
`grep -R '\: \(yes\|no\)$' --exclude-dir=locales`
NOTE: there are multiple issues open to change these booleans so please limit your changes to the list below so we do not have clashing PRs.
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide/developing_modules_general_windows.rst
### Ansible Version
```console
$ ansible --version
2.14
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
none
```
### OS / Environment
none
### Additional Information
none
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78925
|
https://github.com/ansible/ansible/pull/78980
|
6d0aeac1e166842f2833f4fb64c727cc7f818118
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
| 2022-09-29T14:38:19Z |
python
| 2022-10-04T14:29:26Z |
docs/docsite/rst/tips_tricks/sample_setup.rst
|
.. _sample_setup:
********************
Sample Ansible setup
********************
You have learned about playbooks, inventory, roles, and variables. This section combines all those elements and outlines a sample setup for automating a web service. You can find more example playbooks that illustrate these patterns in our `ansible-examples repository <https://github.com/ansible/ansible-examples>`_. (NOTE: These examples do not use all of the latest features, but are still an excellent reference.)
The sample setup organizes playbooks, roles, inventory, and files with variables by function. Tags at the play and task level provide greater granularity and control. This is a powerful and flexible approach, but there are other ways to organize Ansible content. Your usage of Ansible should fit your needs, so feel free to modify this approach and organize your content accordingly.
.. contents::
:local:
Sample directory layout
-----------------------
This layout organizes most tasks in roles, with a single inventory file for each environment and a few playbooks in the top-level directory:
.. code-block:: console
production # inventory file for production servers
staging # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
library/ # if any custom modules, put them here (optional)
module_utils/ # if any custom module_utils to support modules, put them here (optional)
filter_plugins/ # if any custom filter plugins, put them here (optional)
site.yml # main playbook
webservers.yml # playbook for webserver tier
dbservers.yml # playbook for dbserver tier
tasks/ # task files included from playbooks
webservers-extra.yml # <-- avoids confusing playbook with task files
.. include:: shared_snippets/role_directory.txt
.. note:: By default, Ansible assumes your playbooks are stored in one directory with roles stored in a sub-directory called ``roles/``. With more tasks to automate, you can consider moving your playbooks into a sub-directory called ``playbooks/``. If you do this, you must configure the path to your ``roles/`` directory using the ``roles_path`` setting in the ``ansible.cfg`` file.
Alternative directory layout
----------------------------
You can also put each inventory file with its ``group_vars``/``host_vars`` in a separate directory. This is particularly useful if your ``group_vars``/``host_vars`` do not have that much in common in different environments. The layout could look like this example:
.. code-block:: console
inventories/
production/
hosts # inventory file for production servers
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
hostname1.yml # here we assign variables to particular systems
hostname2.yml
staging/
hosts # inventory file for staging environment
group_vars/
group1.yml # here we assign variables to particular groups
group2.yml
host_vars/
stagehost1.yml # here we assign variables to particular systems
stagehost2.yml
library/
module_utils/
filter_plugins/
site.yml
webservers.yml
dbservers.yml
roles/
common/
webtier/
monitoring/
fooapp/
This layout gives you more flexibility for larger environments, as well as a total separation of inventory variables between different environments. However, this approach is harder to maintain, because there are more files. For more information on organizing group and host variables, see :ref:`splitting_out_vars`.
.. _groups_and_hosts:
Sample group and host variables
-------------------------------
These sample group and host files with variables contain the values that apply to each machine or a group of machines. For instance, the data center in Atlanta has its own NTP servers. As a result, when setting up the ``ntp.conf`` file, you could use code similar to this example:
.. code-block:: yaml
---
# file: group_vars/atlanta
ntp: ntp-atlanta.example.com
backup: backup-atlanta.example.com
Similarly, hosts in the webservers group have some configuration that does not apply to the database servers:
.. code-block:: yaml
---
# file: group_vars/webservers
apacheMaxRequestsPerChild: 3000
apacheMaxClients: 900
Default values, or values that are universally true, belong in a file called ``group_vars/all``:
.. code-block:: yaml
---
# file: group_vars/all
ntp: ntp-boston.example.com
backup: backup-boston.example.com
If necessary, you can define specific hardware variance in systems in the ``host_vars`` directory:
.. code-block:: yaml
---
# file: host_vars/db-bos-1.example.com
foo_agent_port: 86
bar_agent_port: 99
If you use :ref:`dynamic inventory <dynamic_inventory>`, Ansible creates many dynamic groups automatically. As a result, a tag like ``class:webserver`` will load in variables from the file ``group_vars/ec2_tag_class_webserver`` automatically.
.. note:: You can access host variables with a special variable called ``hostvars``. See :ref:`special_variables` for a list of these variables. The ``hostvars`` variable can access only host-specific variables, not group variables.
.. _split_by_role:
Sample playbooks organized by function
--------------------------------------
With this setup, a single playbook can define the entire infrastructure. The ``site.yml`` playbook imports two other playbooks, one for the webservers and one for the database servers:
.. code-block:: yaml
---
# file: site.yml
- import_playbook: webservers.yml
- import_playbook: dbservers.yml
The ``webservers.yml`` playbook, also at the top level, maps the configuration of the webservers group to the roles related to the webservers group:
.. code-block:: yaml
---
# file: webservers.yml
- hosts: webservers
roles:
- common
- webtier
With this setup, you can configure your entire infrastructure by running ``site.yml``. Alternatively, to configure just a portion of your infrastructure, run ``webservers.yml``. This is similar to the Ansible ``--limit`` parameter but a little more explicit:
.. code-block:: shell
ansible-playbook site.yml --limit webservers
ansible-playbook webservers.yml
.. _role_organization:
Sample task and handler files in a function-based role
------------------------------------------------------
Ansible loads any file called ``main.yml`` in a role sub-directory. This sample ``tasks/main.yml`` file configures NTP:
.. code-block:: yaml
---
# file: roles/common/tasks/main.yml
- name: be sure ntp is installed
yum:
name: ntp
state: present
tags: ntp
- name: be sure ntp is configured
template:
src: ntp.conf.j2
dest: /etc/ntp.conf
notify:
- restart ntpd
tags: ntp
- name: be sure ntpd is running and enabled
service:
name: ntpd
state: started
    enabled: true
tags: ntp
Here is an example handlers file. Handlers are only triggered when certain tasks report changes. Handlers run at the end of each play:
.. code-block:: yaml
---
# file: roles/common/handlers/main.yml
- name: restart ntpd
service:
name: ntpd
state: restarted
See :ref:`playbooks_reuse_roles` for more information.
.. _organization_examples:
What the sample setup enables
-----------------------------
The basic organizational structure described above enables a lot of different automation options. To reconfigure your entire infrastructure:
.. code-block:: shell
ansible-playbook -i production site.yml
To reconfigure NTP on everything:
.. code-block:: shell
ansible-playbook -i production site.yml --tags ntp
To reconfigure only the webservers:
.. code-block:: shell
ansible-playbook -i production webservers.yml
To reconfigure only the webservers in Boston:
.. code-block:: shell
ansible-playbook -i production webservers.yml --limit boston
To reconfigure only the first 10 webservers in Boston, and then the next 10:
.. code-block:: shell
ansible-playbook -i production webservers.yml --limit boston[0:9]
ansible-playbook -i production webservers.yml --limit boston[10:19]
The sample setup also supports basic ad hoc commands:
.. code-block:: shell
ansible boston -i production -m ping
ansible boston -i production -m command -a '/sbin/reboot'
To discover what tasks would run or what hostnames would be affected by a particular Ansible command:
.. code-block:: shell
# confirm what task names would be run if I ran this command and said "just ntp tasks"
ansible-playbook -i production webservers.yml --tags ntp --list-tasks
# confirm what hostnames might be communicated with if I said "limit to boston"
ansible-playbook -i production webservers.yml --limit boston --list-hosts
.. _dep_vs_config:
Organizing for deployment or configuration
------------------------------------------
The sample setup illustrates a typical configuration topology. When you do multi-tier deployments, you will likely need some additional playbooks that hop between tiers to roll out an application. In this case, you can augment ``site.yml`` with playbooks like ``deploy_exampledotcom.yml``. However, the general concepts still apply. With Ansible you can deploy and configure using the same utility. Therefore, you will probably reuse groups and keep the OS configuration in separate playbooks or roles from the application deployment.
Consider "playbooks" as a sports metaphor -- you can have one set of plays to use against all your infrastructure. Then you have situational plays that you use at different times and for different purposes.
.. _ship_modules_with_playbooks:
Using local Ansible modules
---------------------------
If a playbook has a :file:`./library` directory relative to its YAML file, you can use this directory to add Ansible modules automatically to the module path. This organizes modules with playbooks. For example, see the directory structure at the start of this section.
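For instance, a minimal local module might look like this sketch (the module and option names are hypothetical):

.. code-block:: python

    # library/greet.py
    from ansible.module_utils.basic import AnsibleModule


    def main():
        module = AnsibleModule(argument_spec=dict(name=dict(type='str', required=True)))
        module.exit_json(changed=False, greeting='Hello, %s' % module.params['name'])


    if __name__ == '__main__':
        main()

A playbook next to the ``library/`` directory could then call it as a regular module, for example ``greet: name=world``.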
.. seealso::
:ref:`yaml_syntax`
Learn about YAML syntax
:ref:`working_with_playbooks`
Review the basic playbook features
:ref:`list_of_collections`
Browse existing collections, modules, and plugins
:ref:`developing_modules`
Learn how to extend Ansible by writing your own modules
:ref:`intro_patterns`
Learn about how to select hosts
`GitHub examples directory <https://github.com/ansible/ansible-examples>`_
Complete playbook files from the github project source
`Mailing List <https://groups.google.com/group/ansible-project>`_
Questions? Help? Ideas? Stop by the list on Google Groups
|
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,932 |
ansible-vault encrypt_string output is missing a new line, causing broken output on copy
|
### Summary
```
$ ansible-vault encrypt_string secret
Encryption successful
!vault |
$ANSIBLE_VAULT;1.1;AES256
61633966613365393435663962393261376338336136313065376437373838636336623565363239
6630383564363634636364613830613561316333623739380a653764333864666363663539363363
65316331636165353761626461386166633330623835316362393361343333396234663638653666
3163373561623331340a633161373437343563333135343933376634643638613332643964313135
3430%
```
If you look at the end of the printed data, it contains a `%`, a character added by the terminal to inform the user that the command's output did not end with a newline.
Now you can easily guess that when the user copies and pastes that YAML block, they will have a surprise, as decryption will not work because of the extra `%` at the end.
This bug can easily be avoided by adding a newline at the end of the dumped data. This does not affect the validity of the dumped YAML, but it prevents the terminal from adding the `%` character at the end of the last line, a character which renders the entire block invalid for decryption.
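A minimal sketch of the idea (illustrative only, not the actual ansible-vault code):

```python
def ensure_trailing_newline(b_data: bytes) -> bytes:
    # Appending a newline keeps the terminal from marking the output with '%'
    # and does not change the meaning of the YAML block.
    if not b_data.endswith(b"\n"):
        b_data += b"\n"
    return b_data
```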
### Issue Type
Bug Report
### Component Name
encrypt
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b1]
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/ansible
ansible collection location = /Users/ssbarnea/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/ssbarnea/.pyenv/versions/3.11-dev/bin/ansible
python version = 3.11.0rc2+ (heads/3.11:8e2bda8227, Sep 19 2022, 10:59:25) [Clang 14.0.0 (clang-1400.0.29.102)] (/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
-
```
### OS / Environment
MacOS
### Steps to Reproduce
Just configure an ANSIBLE_VAULT_PASSWORD_FILE=.vault_pass and put a password inside that file, so you can easily encrypt and decrypt.
Try to encrypt a secret like `ansible-vault encrypt_string secret`, select the output and paste it inside a playbook for use.
### Expected Results
Be able to copy/paste the output and be able to use it.
### Actual Results
```console
Vault format unhexlify error: Odd-length string
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78932
|
https://github.com/ansible/ansible/pull/79017
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
|
b5db71e3189418738785ab82509b97f9bc82d6d6
| 2022-09-29T20:34:10Z |
python
| 2022-10-04T15:05:37Z |
changelogs/fragments/79017-ansible-vault-string-encryption-ending-with-newline.yml
| |
closed
|
ansible/ansible
|
https://github.com/ansible/ansible
| 78,932 |
ansible-vault encrypt_string output is missing a new line, causing broken output on copy
|
### Summary
```
$ ansible-vault encrypt_string secret
Encryption successful
!vault |
$ANSIBLE_VAULT;1.1;AES256
61633966613365393435663962393261376338336136313065376437373838636336623565363239
6630383564363634636364613830613561316333623739380a653764333864666363663539363363
65316331636165353761626461386166633330623835316362393361343333396234663638653666
3163373561623331340a633161373437343563333135343933376634643638613332643964313135
3430%
```
If you look at the end of the printed data, it contains a `%`, a character added by the terminal to inform the user that the command's output did not end with a newline.
Now you can easily guess that when the user copies and pastes that YAML block, they will have a surprise, as decryption will not work because of the extra `%` at the end.
This bug can easily be avoided by adding a newline at the end of the dumped data. This does not affect the validity of the dumped YAML, but it prevents the terminal from adding the `%` character at the end of the last line, a character which renders the entire block invalid for decryption.
### Issue Type
Bug Report
### Component Name
encrypt
### Ansible Version
```console
$ ansible --version
ansible [core 2.14.0b1]
config file = /Users/ssbarnea/.ansible.cfg
configured module search path = ['/Users/ssbarnea/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/ssbarnea/.pyenv/versions/3.11-dev/lib/python3.11/site-packages/ansible
ansible collection location = /Users/ssbarnea/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/ssbarnea/.pyenv/versions/3.11-dev/bin/ansible
python version = 3.11.0rc2+ (heads/3.11:8e2bda8227, Sep 19 2022, 10:59:25) [Clang 14.0.0 (clang-1400.0.29.102)] (/Users/ssbarnea/.pyenv/versions/3.11-dev/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
-
```
### OS / Environment
MacOS
### Steps to Reproduce
Just configure an ANSIBLE_VAULT_PASSWORD_FILE=.vault_pass and put a password inside that file, so you can easily encrypt and decrypt.
Try to encrypt a secret like `ansible-vault encrypt_string secret`, select the output and paste it inside a playbook for use.
### Expected Results
Be able to copy/paste the output and be able to use it.
### Actual Results
```console
Vault format unhexlify error: Odd-length string
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct
|
https://github.com/ansible/ansible/issues/78932
|
https://github.com/ansible/ansible/pull/79017
|
56285b1d2bd6dd4ae8ec63fcabcbdba76c4a1cf5
|
b5db71e3189418738785ab82509b97f9bc82d6d6
| 2022-09-29T20:34:10Z |
python
| 2022-10-04T15:05:37Z |
lib/ansible/cli/vault.py
|
#!/usr/bin/env python
# (c) 2014, James Tanner <[email protected]>
# Copyright: (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# PYTHON_ARGCOMPLETE_OK
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
# ansible.cli needs to be imported first, to ensure the source bin/* scripts run that code first
from ansible.cli import CLI
import os
import sys
from ansible import constants as C
from ansible import context
from ansible.cli.arguments import option_helpers as opt_help
from ansible.errors import AnsibleOptionsError
from ansible.module_utils._text import to_text, to_bytes
from ansible.parsing.dataloader import DataLoader
from ansible.parsing.vault import VaultEditor, VaultLib, match_encrypt_secret
from ansible.utils.display import Display
display = Display()
class VaultCLI(CLI):
''' can encrypt any structured data file used by Ansible.
This can include *group_vars/* or *host_vars/* inventory variables,
variables loaded by *include_vars* or *vars_files*, or variable files
passed on the ansible-playbook command line with *-e @file.yml* or *-e @file.json*.
Role variables and defaults are also included!
Because Ansible tasks, handlers, and other objects are data, these can also be encrypted with vault.
If you'd like to not expose what variables you are using, you can keep an individual task file entirely encrypted.
'''
name = 'ansible-vault'
FROM_STDIN = "stdin"
FROM_ARGS = "the command line args"
FROM_PROMPT = "the interactive prompt"
def __init__(self, args):
self.b_vault_pass = None
self.b_new_vault_pass = None
self.encrypt_string_read_stdin = False
self.encrypt_secret = None
self.encrypt_vault_id = None
self.new_encrypt_secret = None
self.new_encrypt_vault_id = None
super(VaultCLI, self).__init__(args)
def init_parser(self):
super(VaultCLI, self).init_parser(
desc="encryption/decryption utility for Ansible data files",
epilog="\nSee '%s <command> --help' for more information on a specific command.\n\n" % os.path.basename(sys.argv[0])
)
common = opt_help.argparse.ArgumentParser(add_help=False)
opt_help.add_vault_options(common)
opt_help.add_verbosity_options(common)
subparsers = self.parser.add_subparsers(dest='action')
subparsers.required = True
output = opt_help.argparse.ArgumentParser(add_help=False)
output.add_argument('--output', default=None, dest='output_file',
help='output file name for encrypt or decrypt; use - for stdout',
type=opt_help.unfrack_path())
# For encrypting actions, we can also specify which of multiple vault ids should be used for encrypting
vault_id = opt_help.argparse.ArgumentParser(add_help=False)
vault_id.add_argument('--encrypt-vault-id', default=[], dest='encrypt_vault_id',
action='store', type=str,
help='the vault id used to encrypt (required if more than one vault-id is provided)')
create_parser = subparsers.add_parser('create', help='Create new vault encrypted file', parents=[vault_id, common])
create_parser.set_defaults(func=self.execute_create)
create_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
decrypt_parser = subparsers.add_parser('decrypt', help='Decrypt vault encrypted file', parents=[output, common])
decrypt_parser.set_defaults(func=self.execute_decrypt)
decrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
edit_parser = subparsers.add_parser('edit', help='Edit vault encrypted file', parents=[vault_id, common])
edit_parser.set_defaults(func=self.execute_edit)
edit_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
view_parser = subparsers.add_parser('view', help='View vault encrypted file', parents=[common])
view_parser.set_defaults(func=self.execute_view)
view_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
encrypt_parser = subparsers.add_parser('encrypt', help='Encrypt YAML file', parents=[common, output, vault_id])
encrypt_parser.set_defaults(func=self.execute_encrypt)
encrypt_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
enc_str_parser = subparsers.add_parser('encrypt_string', help='Encrypt a string', parents=[common, output, vault_id])
enc_str_parser.set_defaults(func=self.execute_encrypt_string)
enc_str_parser.add_argument('args', help='String to encrypt', metavar='string_to_encrypt', nargs='*')
enc_str_parser.add_argument('-p', '--prompt', dest='encrypt_string_prompt',
action='store_true',
help="Prompt for the string to encrypt")
enc_str_parser.add_argument('--show-input', dest='show_string_input', default=False, action='store_true',
help='Do not hide input when prompted for the string to encrypt')
enc_str_parser.add_argument('-n', '--name', dest='encrypt_string_names',
action='append',
help="Specify the variable name")
enc_str_parser.add_argument('--stdin-name', dest='encrypt_string_stdin_name',
default=None,
help="Specify the variable name for stdin")
rekey_parser = subparsers.add_parser('rekey', help='Re-key a vault encrypted file', parents=[common, vault_id])
rekey_parser.set_defaults(func=self.execute_rekey)
rekey_new_group = rekey_parser.add_mutually_exclusive_group()
rekey_new_group.add_argument('--new-vault-password-file', default=None, dest='new_vault_password_file',
help="new vault password file for rekey", type=opt_help.unfrack_path())
rekey_new_group.add_argument('--new-vault-id', default=None, dest='new_vault_id', type=str,
help='the new vault identity to use for rekey')
rekey_parser.add_argument('args', help='Filename', metavar='file_name', nargs='*')
def post_process_args(self, options):
options = super(VaultCLI, self).post_process_args(options)
display.verbosity = options.verbosity
if options.vault_ids:
for vault_id in options.vault_ids:
if u';' in vault_id:
raise AnsibleOptionsError("'%s' is not a valid vault id. The character ';' is not allowed in vault ids" % vault_id)
if getattr(options, 'output_file', None) and len(options.args) > 1:
raise AnsibleOptionsError("At most one input file may be used with the --output option")
if options.action == 'encrypt_string':
if '-' in options.args or not options.args or options.encrypt_string_stdin_name:
self.encrypt_string_read_stdin = True
# TODO: prompting from stdin and reading from stdin seem mutually exclusive, but verify that.
if options.encrypt_string_prompt and self.encrypt_string_read_stdin:
raise AnsibleOptionsError('The --prompt option is not supported if also reading input from stdin')
return options
def run(self):
super(VaultCLI, self).run()
loader = DataLoader()
# set default restrictive umask
old_umask = os.umask(0o077)
vault_ids = list(context.CLIARGS['vault_ids'])
# there are 3 types of actions, those that just 'read' (decrypt, view) and only
# need to ask for a password once, and those that 'write' (create, encrypt) that
# ask for a new password and confirm it, and 'read/write (rekey) that asks for the
# old password, then asks for a new one and confirms it.
default_vault_ids = C.DEFAULT_VAULT_IDENTITY_LIST
vault_ids = default_vault_ids + vault_ids
action = context.CLIARGS['action']
# TODO: instead of prompting for these before, we could let VaultEditor
# call a callback when it needs it.
if action in ['decrypt', 'view', 'rekey', 'edit']:
vault_secrets = self.setup_vault_secrets(loader, vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'])
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
if action in ['encrypt', 'encrypt_string', 'create']:
encrypt_vault_id = None
# no --encrypt-vault-id context.CLIARGS['encrypt_vault_id'] for 'edit'
if action not in ['edit']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
vault_secrets = None
vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=vault_ids,
vault_password_files=list(context.CLIARGS['vault_password_files']),
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if len(vault_secrets) > 1 and not encrypt_vault_id:
raise AnsibleOptionsError("The vault-ids %s are available to encrypt. Specify the vault-id to encrypt with --encrypt-vault-id" %
','.join([x[0] for x in vault_secrets]))
if not vault_secrets:
raise AnsibleOptionsError("A vault password is required to use Ansible's Vault")
encrypt_secret = match_encrypt_secret(vault_secrets,
encrypt_vault_id=encrypt_vault_id)
# only one secret for encrypt for now, use the first vault_id and use its first secret
# TODO: exception if more than one?
self.encrypt_vault_id = encrypt_secret[0]
self.encrypt_secret = encrypt_secret[1]
if action in ['rekey']:
encrypt_vault_id = context.CLIARGS['encrypt_vault_id'] or C.DEFAULT_VAULT_ENCRYPT_IDENTITY
# print('encrypt_vault_id: %s' % encrypt_vault_id)
# print('default_encrypt_vault_id: %s' % default_encrypt_vault_id)
# new_vault_ids should only ever be one item, from
# load the default vault ids if we are using encrypt-vault-id
new_vault_ids = []
if encrypt_vault_id:
new_vault_ids = default_vault_ids
if context.CLIARGS['new_vault_id']:
new_vault_ids.append(context.CLIARGS['new_vault_id'])
new_vault_password_files = []
if context.CLIARGS['new_vault_password_file']:
new_vault_password_files.append(context.CLIARGS['new_vault_password_file'])
new_vault_secrets = \
self.setup_vault_secrets(loader,
vault_ids=new_vault_ids,
vault_password_files=new_vault_password_files,
ask_vault_pass=context.CLIARGS['ask_vault_pass'],
create_new_password=True)
if not new_vault_secrets:
raise AnsibleOptionsError("A new vault password is required to use Ansible's Vault rekey")
# There is only one new_vault_id currently and one new_vault_secret, or we
# use the id specified in --encrypt-vault-id
new_encrypt_secret = match_encrypt_secret(new_vault_secrets,
encrypt_vault_id=encrypt_vault_id)
self.new_encrypt_vault_id = new_encrypt_secret[0]
self.new_encrypt_secret = new_encrypt_secret[1]
loader.set_vault_secrets(vault_secrets)
# FIXME: do we need to create VaultEditor here? its not reused
vault = VaultLib(vault_secrets)
self.editor = VaultEditor(vault)
context.CLIARGS['func']()
# and restore umask
os.umask(old_umask)
def execute_encrypt(self):
''' encrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading plaintext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
# Fixme: use the correct vau
self.editor.encrypt_file(f, self.encrypt_secret,
vault_id=self.encrypt_vault_id,
output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
@staticmethod
def format_ciphertext_yaml(b_ciphertext, indent=None, name=None):
indent = indent or 10
block_format_var_name = ""
if name:
block_format_var_name = "%s: " % name
block_format_header = "%s!vault |" % block_format_var_name
lines = []
vault_ciphertext = to_text(b_ciphertext)
lines.append(block_format_header)
for line in vault_ciphertext.splitlines():
lines.append('%s%s' % (' ' * indent, line))
yaml_ciphertext = '\n'.join(lines)
return yaml_ciphertext
def execute_encrypt_string(self):
''' encrypt the supplied string using the provided vault secret '''
b_plaintext = None
        # Holds tuples (the_text, the_source_of_the_string, the variable name if it is provided).
b_plaintext_list = []
# remove the non-option '-' arg (used to indicate 'read from stdin') from the candidate args so
# we don't add it to the plaintext list
args = [x for x in context.CLIARGS['args'] if x != '-']
# We can prompt and read input, or read from stdin, but not both.
if context.CLIARGS['encrypt_string_prompt']:
msg = "String to encrypt: "
name = None
name_prompt_response = display.prompt('Variable name (enter for no name): ')
# TODO: enforce var naming rules?
if name_prompt_response != "":
name = name_prompt_response
# TODO: could prompt for which vault_id to use for each plaintext string
# currently, it will just be the default
hide_input = not context.CLIARGS['show_string_input']
if hide_input:
msg = "String to encrypt (hidden): "
else:
msg = "String to encrypt:"
prompt_response = display.prompt(msg, private=hide_input)
if prompt_response == '':
raise AnsibleOptionsError('The plaintext provided from the prompt was empty, not encrypting')
b_plaintext = to_bytes(prompt_response)
b_plaintext_list.append((b_plaintext, self.FROM_PROMPT, name))
# read from stdin
if self.encrypt_string_read_stdin:
if sys.stdout.isatty():
display.display("Reading plaintext input from stdin. (ctrl-d to end input, twice if your content does not already have a newline)", stderr=True)
stdin_text = sys.stdin.read()
if stdin_text == '':
raise AnsibleOptionsError('stdin was empty, not encrypting')
if sys.stdout.isatty() and not stdin_text.endswith("\n"):
display.display("\n")
b_plaintext = to_bytes(stdin_text)
# defaults to None
name = context.CLIARGS['encrypt_string_stdin_name']
b_plaintext_list.append((b_plaintext, self.FROM_STDIN, name))
# use any leftover args as strings to encrypt
# Try to match args up to --name options
if context.CLIARGS.get('encrypt_string_names', False):
name_and_text_list = list(zip(context.CLIARGS['encrypt_string_names'], args))
# Some but not enough --name's to name each var
if len(args) > len(name_and_text_list):
# Trying to avoid ever showing the plaintext in the output, so this warning is vague to avoid that.
                display.display('The number of --name options does not match the number of args.',
stderr=True)
display.display('The last named variable will be "%s". The rest will not have'
' names.' % context.CLIARGS['encrypt_string_names'][-1],
stderr=True)
# Add the rest of the args without specifying a name
for extra_arg in args[len(name_and_text_list):]:
name_and_text_list.append((None, extra_arg))
# if no --names are provided, just use the args without a name.
else:
name_and_text_list = [(None, x) for x in args]
# Convert the plaintext text objects to bytestrings and collect
for name_and_text in name_and_text_list:
name, plaintext = name_and_text
if plaintext == '':
raise AnsibleOptionsError('The plaintext provided from the command line args was empty, not encrypting')
b_plaintext = to_bytes(plaintext)
b_plaintext_list.append((b_plaintext, self.FROM_ARGS, name))
# TODO: specify vault_id per string?
# Format the encrypted strings and any corresponding stderr output
outputs = self._format_output_vault_strings(b_plaintext_list, vault_id=self.encrypt_vault_id)
b_outs = []
for output in outputs:
err = output.get('err', None)
out = output.get('out', '')
if err:
sys.stderr.write(err)
b_outs.append(to_bytes(out))
        # Ensure the output ends with a newline so terminals and copy/paste do not mangle it (see issue #78932)
        self.editor.write_data(b'\n'.join(b_outs) + b'\n', context.CLIARGS['output_file'] or '-')
if sys.stdout.isatty():
display.display("Encryption successful", stderr=True)
# TODO: offer block or string ala eyaml
def _format_output_vault_strings(self, b_plaintext_list, vault_id=None):
        # If we are only showing one item in the output, we don't need to include commented
# delimiters in the text
show_delimiter = False
if len(b_plaintext_list) > 1:
show_delimiter = True
# list of dicts {'out': '', 'err': ''}
output = []
# Encrypt the plaintext, and format it into a yaml block that can be pasted into a playbook.
# For more than one input, show some differentiating info in the stderr output so we can tell them
# apart. If we have a var name, we include that in the yaml
for index, b_plaintext_info in enumerate(b_plaintext_list):
# (the text itself, which input it came from, its name)
b_plaintext, src, name = b_plaintext_info
b_ciphertext = self.editor.encrypt_bytes(b_plaintext, self.encrypt_secret, vault_id=vault_id)
# block formatting
yaml_text = self.format_ciphertext_yaml(b_ciphertext, name=name)
err_msg = None
if show_delimiter:
human_index = index + 1
if name:
err_msg = '# The encrypted version of variable ("%s", the string #%d from %s).\n' % (name, human_index, src)
else:
                    err_msg = '# The encrypted version of the string #%d from %s.\n' % (human_index, src)
output.append({'out': yaml_text, 'err': err_msg})
return output
def execute_decrypt(self):
''' decrypt the supplied file using the provided vault secret '''
if not context.CLIARGS['args'] and sys.stdin.isatty():
display.display("Reading ciphertext input from stdin", stderr=True)
for f in context.CLIARGS['args'] or ['-']:
self.editor.decrypt_file(f, output_file=context.CLIARGS['output_file'])
if sys.stdout.isatty():
display.display("Decryption successful", stderr=True)
def execute_create(self):
''' create and open a file in an editor that will be encrypted with the provided vault secret when closed'''
if len(context.CLIARGS['args']) != 1:
raise AnsibleOptionsError("ansible-vault create can take only one filename argument")
self.editor.create_file(context.CLIARGS['args'][0], self.encrypt_secret,
vault_id=self.encrypt_vault_id)
def execute_edit(self):
''' open and decrypt an existing vaulted file in an editor, that will be encrypted again when closed'''
for f in context.CLIARGS['args']:
self.editor.edit_file(f)
def execute_view(self):
''' open, decrypt and view an existing vaulted file using a pager using the supplied vault secret '''
for f in context.CLIARGS['args']:
# Note: vault should return byte strings because it could encrypt
# and decrypt binary files. We are responsible for changing it to
# unicode here because we are displaying it and therefore can make
# the decision that the display doesn't have to be precisely what
# the input was (leave that to decrypt instead)
plaintext = self.editor.plaintext(f)
self.pager(to_text(plaintext))
def execute_rekey(self):
''' re-encrypt a vaulted file with a new secret, the previous secret is required '''
for f in context.CLIARGS['args']:
# FIXME: plumb in vault_id, use the default new_vault_secret for now
self.editor.rekey_file(f, self.new_encrypt_secret,
self.new_encrypt_vault_id)
display.display("Rekey successful", stderr=True)
def main(args=None):
VaultCLI.cli_executor(args)
if __name__ == '__main__':
main()
|